00:00:00.001 Started by upstream project "autotest-per-patch" build number 130575
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.104 The recommended git tool is: git
00:00:00.104 using credential 00000000-0000-0000-0000-000000000002
00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.171 Fetching changes from the remote Git repository
00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.215 Using shallow fetch with depth 1
00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.215 > git --version # timeout=10
00:00:00.264 > git --version # 'git version 2.39.2'
00:00:00.264 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.296 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.296 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:17.648 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:17.661 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:17.674 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:17.675 > git config core.sparsecheckout # timeout=10
00:00:17.686 > git read-tree -mu HEAD # timeout=10
00:00:17.702 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:17.722 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:17.722 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:17.829 [Pipeline] Start of Pipeline
00:00:17.842 [Pipeline] library
00:00:17.844 Loading library shm_lib@master
00:00:17.844 Library shm_lib@master is cached. Copying from home.
00:00:17.859 [Pipeline] node
00:00:17.866 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_4
00:00:17.868 [Pipeline] {
00:00:17.878 [Pipeline] catchError
00:00:17.880 [Pipeline] {
00:00:17.894 [Pipeline] wrap
00:00:17.904 [Pipeline] {
00:00:17.913 [Pipeline] stage
00:00:17.915 [Pipeline] { (Prologue)
00:00:17.935 [Pipeline] echo
00:00:17.937 Node: VM-host-SM9
00:00:17.944 [Pipeline] cleanWs
00:00:17.953 [WS-CLEANUP] Deleting project workspace...
00:00:17.953 [WS-CLEANUP] Deferred wipeout is used...
00:00:17.958 [WS-CLEANUP] done
00:00:18.145 [Pipeline] setCustomBuildProperty
00:00:18.239 [Pipeline] httpRequest
00:00:18.612 [Pipeline] echo
00:00:18.614 Sorcerer 10.211.164.101 is alive
00:00:18.624 [Pipeline] retry
00:00:18.627 [Pipeline] {
00:00:18.641 [Pipeline] httpRequest
00:00:18.645 HttpMethod: GET
00:00:18.646 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:18.647 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:18.668 Response Code: HTTP/1.1 200 OK
00:00:18.669 Success: Status code 200 is in the accepted range: 200,404
00:00:18.669 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:36.828 [Pipeline] }
00:00:36.849 [Pipeline] // retry
00:00:36.857 [Pipeline] sh
00:00:37.134 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:37.148 [Pipeline] httpRequest
00:00:37.544 [Pipeline] echo
00:00:37.546 Sorcerer 10.211.164.101 is alive
00:00:37.556 [Pipeline] retry
00:00:37.558 [Pipeline] {
00:00:37.574 [Pipeline] httpRequest
00:00:37.578 HttpMethod: GET
00:00:37.579 URL: http://10.211.164.101/packages/spdk_f15f2a1dd1a6c323a60501074c1fd68388240dbe.tar.gz
00:00:37.579 Sending request to url: http://10.211.164.101/packages/spdk_f15f2a1dd1a6c323a60501074c1fd68388240dbe.tar.gz
00:00:37.589 Response Code: HTTP/1.1 200 OK
00:00:37.590 Success: Status code 200 is in the accepted range: 200,404
00:00:37.590 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/spdk_f15f2a1dd1a6c323a60501074c1fd68388240dbe.tar.gz
00:01:43.543 [Pipeline] }
00:01:43.562 [Pipeline] // retry
00:01:43.571 [Pipeline] sh
00:01:43.866 + tar --no-same-owner -xf spdk_f15f2a1dd1a6c323a60501074c1fd68388240dbe.tar.gz
00:01:47.160 [Pipeline] sh
00:01:47.440 + git -C spdk log --oneline -n5
00:01:47.440 f15f2a1dd bdev/nvme: controller failover/multipath doc change
00:01:47.440 bb8a22175 bdev/nvme: changed default config to multipath
00:01:47.440 67cfdf5cc bdev/nvme: ctrl config consistency check
00:01:47.440 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:01:47.440 a67b3561a dpdk: update submodule to include alarm_cancel fix
00:01:47.455 [Pipeline] writeFile
00:01:47.468 [Pipeline] sh
00:01:47.746 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:47.757 [Pipeline] sh
00:01:48.033 + cat autorun-spdk.conf
00:01:48.033 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.033 SPDK_TEST_NVMF=1
00:01:48.033 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:48.033 SPDK_TEST_USDT=1
00:01:48.033 SPDK_TEST_NVMF_MDNS=1
00:01:48.033 SPDK_RUN_UBSAN=1
00:01:48.033 NET_TYPE=virt
00:01:48.033 SPDK_JSONRPC_GO_CLIENT=1
00:01:48.033 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:48.039 RUN_NIGHTLY=0
00:01:48.042 [Pipeline] }
00:01:48.055 [Pipeline] // stage
00:01:48.068 [Pipeline] stage
00:01:48.070 [Pipeline] { (Run VM)
00:01:48.082 [Pipeline] sh
00:01:48.359 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:48.359 + echo 'Start stage prepare_nvme.sh'
00:01:48.359 Start stage prepare_nvme.sh
00:01:48.359 + [[ -n 2 ]]
00:01:48.359 + disk_prefix=ex2
00:01:48.359 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_4 ]]
00:01:48.359 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/autorun-spdk.conf ]]
00:01:48.359 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/autorun-spdk.conf
00:01:48.359 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.359 ++ SPDK_TEST_NVMF=1
00:01:48.359 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:48.359 ++ SPDK_TEST_USDT=1
00:01:48.359 ++ SPDK_TEST_NVMF_MDNS=1
00:01:48.359 ++ SPDK_RUN_UBSAN=1
00:01:48.359 ++ NET_TYPE=virt
00:01:48.359 ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:48.359 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:48.359 ++ RUN_NIGHTLY=0
00:01:48.359 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_4
00:01:48.359 + nvme_files=()
00:01:48.359 + declare -A nvme_files
00:01:48.359 + backend_dir=/var/lib/libvirt/images/backends
00:01:48.359 + nvme_files['nvme.img']=5G
00:01:48.359 + nvme_files['nvme-cmb.img']=5G
00:01:48.359 + nvme_files['nvme-multi0.img']=4G
00:01:48.359 + nvme_files['nvme-multi1.img']=4G
00:01:48.359 + nvme_files['nvme-multi2.img']=4G
00:01:48.359 + nvme_files['nvme-openstack.img']=8G
00:01:48.359 + nvme_files['nvme-zns.img']=5G
00:01:48.359 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:48.359 + (( SPDK_TEST_FTL == 1 ))
00:01:48.359 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:48.359 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:48.359 + for nvme in "${!nvme_files[@]}"
00:01:48.359 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:48.359 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:48.359 + for nvme in "${!nvme_files[@]}"
00:01:48.359 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:48.359 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:48.359 + for nvme in "${!nvme_files[@]}"
00:01:48.359 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:48.617 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:48.617 + for nvme in "${!nvme_files[@]}"
00:01:48.617 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:48.617 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:48.617 + for nvme in "${!nvme_files[@]}"
00:01:48.617 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:48.617 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:48.617 + for nvme in "${!nvme_files[@]}"
00:01:48.617 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:48.875 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:48.875 + for nvme in "${!nvme_files[@]}"
00:01:48.875 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:49.462 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:49.462 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:49.462 + echo 'End stage prepare_nvme.sh'
00:01:49.463 End stage prepare_nvme.sh
00:01:49.507 [Pipeline] sh
00:01:49.788 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:49.788 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:01:49.788
00:01:49.788 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_4/spdk/scripts/vagrant
00:01:49.788 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_4/spdk
00:01:49.788 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_4
00:01:49.788 HELP=0
00:01:49.788 DRY_RUN=0
00:01:49.788 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:01:49.788 NVME_DISKS_TYPE=nvme,nvme,
00:01:49.788 NVME_AUTO_CREATE=0
00:01:49.788 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:01:49.788 NVME_CMB=,,
00:01:49.788 NVME_PMR=,,
00:01:49.788 NVME_ZNS=,,
00:01:49.788 NVME_MS=,,
00:01:49.788 NVME_FDP=,,
00:01:49.788 SPDK_VAGRANT_DISTRO=fedora39
00:01:49.788 SPDK_VAGRANT_VMCPU=10
00:01:49.788 SPDK_VAGRANT_VMRAM=12288
00:01:49.788 SPDK_VAGRANT_PROVIDER=libvirt
00:01:49.788 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:49.788 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:49.788 SPDK_OPENSTACK_NETWORK=0
00:01:49.788 VAGRANT_PACKAGE_BOX=0
00:01:49.788 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_4/spdk/scripts/vagrant/Vagrantfile
00:01:49.788 FORCE_DISTRO=true
00:01:49.788 VAGRANT_BOX_VERSION=
00:01:49.788 EXTRA_VAGRANTFILES=
00:01:49.788 NIC_MODEL=e1000
00:01:49.788
00:01:49.788 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora39-libvirt'
00:01:49.788 /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_4
00:01:53.070 Bringing machine 'default' up with 'libvirt' provider...
00:01:53.636 ==> default: Creating image (snapshot of base box volume).
00:01:53.636 ==> default: Creating domain with the following settings...
00:01:53.636 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727795692_7d745146e840ec8f7b61
00:01:53.636 ==> default: -- Domain type: kvm
00:01:53.636 ==> default: -- Cpus: 10
00:01:53.636 ==> default: -- Feature: acpi
00:01:53.636 ==> default: -- Feature: apic
00:01:53.636 ==> default: -- Feature: pae
00:01:53.636 ==> default: -- Memory: 12288M
00:01:53.636 ==> default: -- Memory Backing: hugepages:
00:01:53.636 ==> default: -- Management MAC:
00:01:53.636 ==> default: -- Loader:
00:01:53.636 ==> default: -- Nvram:
00:01:53.636 ==> default: -- Base box: spdk/fedora39
00:01:53.636 ==> default: -- Storage pool: default
00:01:53.636 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727795692_7d745146e840ec8f7b61.img (20G)
00:01:53.636 ==> default: -- Volume Cache: default
00:01:53.636 ==> default: -- Kernel:
00:01:53.636 ==> default: -- Initrd:
00:01:53.636 ==> default: -- Graphics Type: vnc
00:01:53.636 ==> default: -- Graphics Port: -1
00:01:53.636 ==> default: -- Graphics IP: 127.0.0.1
00:01:53.636 ==> default: -- Graphics Password: Not defined
00:01:53.636 ==> default: -- Video Type: cirrus
00:01:53.636 ==> default: -- Video VRAM: 9216
00:01:53.636 ==> default: -- Sound Type:
00:01:53.636 ==> default: -- Keymap: en-us
00:01:53.636 ==> default: -- TPM Path:
00:01:53.636 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:53.636 ==> default: -- Command line args:
00:01:53.636 ==> default: -> value=-device,
00:01:53.636 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:53.636 ==> default: -> value=-drive,
00:01:53.636 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:01:53.636 ==> default: -> value=-device,
00:01:53.636 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:53.636 ==> default: -> value=-device,
00:01:53.636 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:53.636 ==> default: -> value=-drive,
00:01:53.636 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:53.636 ==> default: -> value=-device,
00:01:53.636 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:53.636 ==> default: -> value=-drive,
00:01:53.636 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:53.636 ==> default: -> value=-device,
00:01:53.636 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:53.636 ==> default: -> value=-drive,
00:01:53.636 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:53.636 ==> default: -> value=-device,
00:01:53.636 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:53.894 ==> default: Creating shared folders metadata...
00:01:53.894 ==> default: Starting domain.
00:01:55.269 ==> default: Waiting for domain to get an IP address...
00:02:13.350 ==> default: Waiting for SSH to become available...
00:02:13.350 ==> default: Configuring and enabling network interfaces...
00:02:15.881 default: SSH address: 192.168.121.88:22
00:02:15.881 default: SSH username: vagrant
00:02:15.881 default: SSH auth method: private key
00:02:17.830 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_4/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:25.933 ==> default: Mounting SSHFS shared folder...
00:02:27.305 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:27.305 ==> default: Checking Mount..
00:02:28.677 ==> default: Folder Successfully Mounted!
00:02:28.677 ==> default: Running provisioner: file...
00:02:29.243 default: ~/.gitconfig => .gitconfig
00:02:29.810
00:02:29.810 SUCCESS!
00:02:29.810
00:02:29.810 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora39-libvirt and type "vagrant ssh" to use.
00:02:29.810 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:29.810 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora39-libvirt" to destroy all trace of vm.
00:02:29.810
00:02:29.818 [Pipeline] }
00:02:29.833 [Pipeline] // stage
00:02:29.844 [Pipeline] dir
00:02:29.844 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora39-libvirt
00:02:29.845 [Pipeline] {
00:02:29.857 [Pipeline] catchError
00:02:29.858 [Pipeline] {
00:02:29.869 [Pipeline] sh
00:02:30.147 + vagrant ssh-config --host vagrant
00:02:30.147 + sed -ne /^Host/,$p
00:02:30.147 + tee ssh_conf
00:02:34.359 Host vagrant
00:02:34.359 HostName 192.168.121.88
00:02:34.359 User vagrant
00:02:34.359 Port 22
00:02:34.359 UserKnownHostsFile /dev/null
00:02:34.359 StrictHostKeyChecking no
00:02:34.359 PasswordAuthentication no
00:02:34.359 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:34.359 IdentitiesOnly yes
00:02:34.359 LogLevel FATAL
00:02:34.359 ForwardAgent yes
00:02:34.359 ForwardX11 yes
00:02:34.359
00:02:34.373 [Pipeline] withEnv
00:02:34.375 [Pipeline] {
00:02:34.389 [Pipeline] sh
00:02:34.667 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:34.667 source /etc/os-release
00:02:34.667 [[ -e /image.version ]] && img=$(< /image.version)
00:02:34.667 # Minimal, systemd-like check.
00:02:34.667 if [[ -e /.dockerenv ]]; then
00:02:34.667 # Clear garbage from the node's name:
00:02:34.667 # agt-er_autotest_547-896 -> autotest_547-896
00:02:34.667 # $HOSTNAME is the actual container id
00:02:34.667 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:34.667 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:34.667 # We can assume this is a mount from a host where container is running,
00:02:34.667 # so fetch its hostname to easily identify the target swarm worker.
00:02:34.667 container="$(< /etc/hostname) ($agent)"
00:02:34.667 else
00:02:34.667 # Fallback
00:02:34.667 container=$agent
00:02:34.667 fi
00:02:34.667 fi
00:02:34.667 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:34.667
00:02:34.677 [Pipeline] }
00:02:34.693 [Pipeline] // withEnv
00:02:34.700 [Pipeline] setCustomBuildProperty
00:02:34.741 [Pipeline] stage
00:02:34.744 [Pipeline] { (Tests)
00:02:34.762 [Pipeline] sh
00:02:35.047 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:35.059 [Pipeline] sh
00:02:35.337 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:35.607 [Pipeline] timeout
00:02:35.608 Timeout set to expire in 1 hr 0 min
00:02:35.609 [Pipeline] {
00:02:35.622 [Pipeline] sh
00:02:35.901 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:36.466 HEAD is now at f15f2a1dd bdev/nvme: controller failover/multipath doc change
00:02:36.476 [Pipeline] sh
00:02:36.752 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:37.021 [Pipeline] sh
00:02:37.296 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:37.568 [Pipeline] sh
00:02:37.844 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:02:37.844 ++ readlink -f spdk_repo
00:02:37.844 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:37.844 + [[ -n /home/vagrant/spdk_repo ]]
00:02:37.844 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:37.844 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:37.844 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:37.844 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:37.844 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:37.844 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:02:37.844 + cd /home/vagrant/spdk_repo
00:02:37.844 + source /etc/os-release
00:02:37.844 ++ NAME='Fedora Linux'
00:02:37.844 ++ VERSION='39 (Cloud Edition)'
00:02:37.844 ++ ID=fedora
00:02:37.844 ++ VERSION_ID=39
00:02:37.844 ++ VERSION_CODENAME=
00:02:37.844 ++ PLATFORM_ID=platform:f39
00:02:37.844 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:37.844 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:37.844 ++ LOGO=fedora-logo-icon
00:02:37.844 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:37.844 ++ HOME_URL=https://fedoraproject.org/
00:02:37.844 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:37.844 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:37.844 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:37.844 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:37.844 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:37.844 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:37.844 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:37.844 ++ SUPPORT_END=2024-11-12
00:02:37.844 ++ VARIANT='Cloud Edition'
00:02:37.844 ++ VARIANT_ID=cloud
00:02:37.844 + uname -a
00:02:37.844 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:37.844 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:38.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:38.411 Hugepages
00:02:38.411 node hugesize free / total
00:02:38.411 node0 1048576kB 0 / 0
00:02:38.411 node0 2048kB 0 / 0
00:02:38.411
00:02:38.411 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:38.411 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:38.411 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:38.411 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:38.411 + rm -f /tmp/spdk-ld-path
00:02:38.411 + source autorun-spdk.conf
00:02:38.411 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:38.411 ++ SPDK_TEST_NVMF=1
00:02:38.411 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:38.411 ++ SPDK_TEST_USDT=1
00:02:38.411 ++ SPDK_TEST_NVMF_MDNS=1
00:02:38.411 ++ SPDK_RUN_UBSAN=1
00:02:38.411 ++ NET_TYPE=virt
00:02:38.411 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:38.411 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:38.411 ++ RUN_NIGHTLY=0
00:02:38.411 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:38.411 + [[ -n '' ]]
00:02:38.411 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:38.411 + for M in /var/spdk/build-*-manifest.txt
00:02:38.411 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:38.411 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:38.411 + for M in /var/spdk/build-*-manifest.txt
00:02:38.411 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:38.411 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:38.411 + for M in /var/spdk/build-*-manifest.txt
00:02:38.411 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:38.411 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:38.411 ++ uname
00:02:38.411 + [[ Linux == \L\i\n\u\x ]]
00:02:38.411 + sudo dmesg -T
00:02:38.411 + sudo dmesg --clear
00:02:38.669 + dmesg_pid=5251
00:02:38.669 + [[ Fedora Linux == FreeBSD ]]
00:02:38.669 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:38.669 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:38.669 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:38.669 + sudo dmesg -Tw
00:02:38.669 + [[ -x /usr/src/fio-static/fio ]]
00:02:38.669 + export FIO_BIN=/usr/src/fio-static/fio
00:02:38.669 + FIO_BIN=/usr/src/fio-static/fio
00:02:38.669 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:38.669 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:38.669 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:38.669 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:38.669 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:38.669 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:38.669 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:38.669 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:38.669 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:38.669 Test configuration:
00:02:38.669 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:38.669 SPDK_TEST_NVMF=1
00:02:38.669 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:38.669 SPDK_TEST_USDT=1
00:02:38.669 SPDK_TEST_NVMF_MDNS=1
00:02:38.669 SPDK_RUN_UBSAN=1
00:02:38.669 NET_TYPE=virt
00:02:38.669 SPDK_JSONRPC_GO_CLIENT=1
00:02:38.669 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:38.669 RUN_NIGHTLY=0
00:02:38.669 15:15:37 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:02:38.669 15:15:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:38.669 15:15:37 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:38.669 15:15:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:38.669 15:15:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:38.669 15:15:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:38.669 15:15:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:38.669 15:15:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:38.669 15:15:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:38.669 15:15:37 -- paths/export.sh@5 -- $ export PATH
00:02:38.669 15:15:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:38.669 15:15:37 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:38.669 15:15:37 -- common/autobuild_common.sh@479 -- $ date +%s
00:02:38.669 15:15:37 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727795737.XXXXXX
00:02:38.669 15:15:37 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727795737.RyaqSy
00:02:38.669 15:15:37 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:02:38.669 15:15:37 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:02:38.669 15:15:37 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:38.669 15:15:37 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:38.669 15:15:37 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:38.669 15:15:37 -- common/autobuild_common.sh@495 -- $ get_config_params
00:02:38.669 15:15:37 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:38.669 15:15:37 -- common/autotest_common.sh@10 -- $ set +x
00:02:38.669 15:15:37 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:02:38.669 15:15:37 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:02:38.669 15:15:37 -- pm/common@17 -- $ local monitor
00:02:38.669 15:15:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:38.669 15:15:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:38.669 15:15:37 -- pm/common@25 -- $ sleep 1
00:02:38.670 15:15:37 -- pm/common@21 -- $ date +%s
00:02:38.670 15:15:37 -- pm/common@21 -- $ date +%s
00:02:38.670 15:15:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727795737
00:02:38.670 15:15:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727795737
00:02:38.670 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727795737_collect-vmstat.pm.log
00:02:38.670 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727795737_collect-cpu-load.pm.log
00:02:39.604 15:15:38 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:02:39.604 15:15:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:39.604 15:15:38 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:39.604 15:15:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:39.604 15:15:38 -- spdk/autobuild.sh@16 -- $ date -u
00:02:39.604 Tue Oct 1 03:15:38 PM UTC 2024
00:02:39.604 15:15:38 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:39.604 v25.01-pre-20-gf15f2a1dd
00:02:39.604 15:15:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:39.604 15:15:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:39.604 15:15:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:39.604 15:15:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:39.604 15:15:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:39.604 15:15:38 -- common/autotest_common.sh@10 -- $ set +x
00:02:39.604 ************************************
00:02:39.604 START TEST ubsan
00:02:39.604 ************************************
00:02:39.604 using ubsan
00:02:39.604 15:15:38 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:39.604
00:02:39.604 real 0m0.000s
00:02:39.604 user 0m0.000s
00:02:39.604 sys 0m0.000s
00:02:39.604 15:15:38 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:39.604 15:15:38 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:39.604 ************************************
00:02:39.604 END TEST ubsan
00:02:39.604 ************************************
00:02:39.604 15:15:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:39.604 15:15:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:39.604 15:15:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:39.604 15:15:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:39.604 15:15:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:39.604 15:15:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:39.604 15:15:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:39.604 15:15:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:39.604 15:15:38 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
00:02:39.863 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:39.863 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:40.429 Using 'verbs' RDMA provider
00:02:53.193 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:05.386 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:05.386 go version go1.21.1 linux/amd64
00:03:05.386 Creating mk/config.mk...done.
00:03:05.386 Creating mk/cc.flags.mk...done.
00:03:05.386 Type 'make' to build.
00:03:05.386 15:16:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:05.386 15:16:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:05.386 15:16:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:05.386 15:16:03 -- common/autotest_common.sh@10 -- $ set +x
00:03:05.386 ************************************
00:03:05.386 START TEST make
00:03:05.386 ************************************
00:03:05.386 15:16:03 make -- common/autotest_common.sh@1125 -- $ make -j10
00:03:05.386 make[1]: Nothing to be done for 'all'.
00:03:23.466 The Meson build system
00:03:23.466 Version: 1.5.0
00:03:23.466 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:23.466 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:23.466 Build type: native build
00:03:23.466 Program cat found: YES (/usr/bin/cat)
00:03:23.466 Project name: DPDK
00:03:23.466 Project version: 24.03.0
00:03:23.466 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:23.466 C linker for the host machine: cc ld.bfd 2.40-14
00:03:23.466 Host machine cpu family: x86_64
00:03:23.466 Host machine cpu: x86_64
00:03:23.466 Message: ## Building in Developer Mode ##
00:03:23.466 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:23.466 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:23.466 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:23.466 Program python3 found: YES (/usr/bin/python3)
00:03:23.466 Program cat found: YES (/usr/bin/cat)
00:03:23.466 Compiler for C supports arguments -march=native: YES
00:03:23.466 Checking for size of "void *" : 8
00:03:23.466 Checking for size of "void *" : 8 (cached)
00:03:23.466 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:23.466 Library m found: YES
00:03:23.466 Library numa found: YES
00:03:23.466 Has header "numaif.h" : YES
00:03:23.466 Library fdt found: NO
00:03:23.466 Library execinfo found: NO
00:03:23.466 Has header "execinfo.h" : YES
00:03:23.466 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:23.466 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:23.466 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:23.466 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:23.466 Run-time dependency openssl found: YES 3.1.1
00:03:23.466 Run-time dependency libpcap found: YES 1.10.4
00:03:23.466 Has header "pcap.h" with dependency libpcap: YES
00:03:23.466 Compiler for C supports arguments -Wcast-qual: YES
00:03:23.466 Compiler for C supports arguments -Wdeprecated: YES
00:03:23.466 Compiler for C supports arguments -Wformat: YES
00:03:23.466 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:23.466 Compiler for C supports arguments -Wformat-security: NO
00:03:23.466 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:23.466 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:23.466 Compiler for C supports arguments -Wnested-externs: YES
00:03:23.466 Compiler for C supports arguments -Wold-style-definition: YES
00:03:23.466 Compiler for C supports arguments -Wpointer-arith: YES
00:03:23.466 Compiler for C supports arguments -Wsign-compare: YES
00:03:23.466 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:23.466 Compiler for C supports arguments -Wundef: YES
00:03:23.466 Compiler for C supports arguments -Wwrite-strings: YES
00:03:23.466 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:23.466 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:23.466 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:23.466 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:23.466 Program objdump found: YES (/usr/bin/objdump)
00:03:23.466 Compiler for C supports arguments -mavx512f: YES
00:03:23.466 Checking if "AVX512 checking" compiles: YES
00:03:23.466 Fetching value of define "__SSE4_2__" : 1
00:03:23.466 Fetching value of define "__AES__" : 1
00:03:23.466 Fetching value of define "__AVX__" : 1
00:03:23.466 Fetching value of define "__AVX2__" : 1
00:03:23.466 Fetching value of define "__AVX512BW__" : (undefined)
00:03:23.466 Fetching value of define "__AVX512CD__" : (undefined)
00:03:23.466 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:23.466 Fetching value of define "__AVX512F__" : (undefined)
00:03:23.466 Fetching value of define "__AVX512VL__" : (undefined)
00:03:23.466 Fetching value of define "__PCLMUL__" : 1
00:03:23.466 Fetching value of define "__RDRND__" : 1
00:03:23.466 Fetching value of define "__RDSEED__" : 1
00:03:23.466 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:23.466 Fetching value of define "__znver1__" : (undefined)
00:03:23.466 Fetching value of define "__znver2__" : (undefined)
00:03:23.466 Fetching value of define "__znver3__" : (undefined)
00:03:23.466 Fetching value of define "__znver4__" : (undefined)
00:03:23.466 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:23.466 Message: lib/log: Defining dependency "log"
00:03:23.466 Message: lib/kvargs: Defining dependency "kvargs"
00:03:23.466 Message: lib/telemetry: Defining dependency "telemetry"
00:03:23.466 Checking for function "getentropy" : NO
00:03:23.466 Message: lib/eal: Defining dependency "eal"
00:03:23.466 Message: lib/ring: Defining dependency "ring"
00:03:23.466 Message: lib/rcu: Defining dependency "rcu"
00:03:23.466 Message: lib/mempool: Defining dependency "mempool"
00:03:23.466 Message: lib/mbuf: Defining dependency "mbuf"
00:03:23.466 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:23.466 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:23.466 Compiler for C supports arguments -mpclmul: YES
00:03:23.466 Compiler for C supports arguments -maes: YES
00:03:23.466 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:23.466 Compiler for C supports arguments -mavx512bw: YES
00:03:23.466 Compiler for C supports arguments -mavx512dq: YES
00:03:23.466 Compiler for C supports arguments -mavx512vl: YES
00:03:23.466 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:23.466 Compiler for C supports arguments -mavx2: YES
00:03:23.466 Compiler for C supports arguments -mavx: YES
00:03:23.466 Message: lib/net: Defining dependency "net"
00:03:23.466 Message: lib/meter: Defining dependency "meter"
00:03:23.466 Message: lib/ethdev: Defining dependency "ethdev"
00:03:23.466 Message: lib/pci: Defining dependency "pci"
00:03:23.466 Message: lib/cmdline: Defining dependency "cmdline"
00:03:23.466 Message: lib/hash: Defining dependency "hash"
00:03:23.466 Message: lib/timer: Defining dependency "timer"
00:03:23.466 Message: lib/compressdev: Defining dependency "compressdev"
00:03:23.466 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:23.466 Message: lib/dmadev: Defining dependency "dmadev"
00:03:23.466 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:23.466 Message: lib/power: Defining dependency "power"
00:03:23.466 Message: lib/reorder: Defining dependency "reorder"
00:03:23.466 Message: lib/security: Defining dependency "security"
00:03:23.466 Has header "linux/userfaultfd.h" : YES
00:03:23.466 Has header "linux/vduse.h" : YES
00:03:23.466 Message: lib/vhost: Defining dependency "vhost"
00:03:23.466 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:23.466 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:23.466 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:23.466 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:23.466 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:23.466 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:23.466 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:23.466 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:23.466 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:23.466 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:23.466 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:23.466 Configuring doxy-api-html.conf using configuration
00:03:23.466 Configuring doxy-api-man.conf using configuration
00:03:23.466 Program mandb found: YES (/usr/bin/mandb)
00:03:23.466 Program sphinx-build found: NO
00:03:23.466 Configuring rte_build_config.h using configuration
00:03:23.466 Message:
00:03:23.466 =================
00:03:23.466 Applications Enabled
00:03:23.466 =================
00:03:23.466
00:03:23.466 apps:
00:03:23.466
00:03:23.466
00:03:23.466 Message:
00:03:23.466 =================
00:03:23.466 Libraries Enabled
00:03:23.466 =================
00:03:23.466
00:03:23.466 libs:
00:03:23.467 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:23.467 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:23.467 cryptodev, dmadev, power, reorder, security, vhost,
00:03:23.467
00:03:23.467 Message:
00:03:23.467 ===============
00:03:23.467 Drivers Enabled
00:03:23.467 ===============
00:03:23.467
00:03:23.467 common:
00:03:23.467
00:03:23.467 bus:
00:03:23.467 pci, vdev,
00:03:23.467 mempool:
00:03:23.467 ring,
00:03:23.467 dma:
00:03:23.467
00:03:23.467 net:
00:03:23.467
00:03:23.467 crypto:
00:03:23.467
00:03:23.467 compress:
00:03:23.467
00:03:23.467 vdpa:
00:03:23.467
00:03:23.467
00:03:23.467 Message:
00:03:23.467 =================
00:03:23.467 Content Skipped
00:03:23.467 =================
00:03:23.467
00:03:23.467 apps:
00:03:23.467 dumpcap: explicitly disabled via build config
00:03:23.467 graph: explicitly disabled via build config
00:03:23.467 pdump: explicitly disabled via build config
00:03:23.467 proc-info: explicitly disabled via build config
00:03:23.467 test-acl: explicitly disabled via build config
00:03:23.467 test-bbdev: explicitly disabled via build config
00:03:23.467 test-cmdline: explicitly disabled via build config
00:03:23.467 test-compress-perf: explicitly disabled via build config
00:03:23.467 test-crypto-perf: explicitly disabled via build config
00:03:23.467 test-dma-perf: explicitly disabled via build config
00:03:23.467 test-eventdev: explicitly disabled via build config
00:03:23.467 test-fib: explicitly disabled via build config
00:03:23.467 test-flow-perf: explicitly disabled via build config
00:03:23.467 test-gpudev: explicitly disabled via build config
00:03:23.467 test-mldev: explicitly disabled via build config
00:03:23.467 test-pipeline: explicitly disabled via build config
00:03:23.467 test-pmd: explicitly disabled via build config
00:03:23.467 test-regex: explicitly disabled via build config
00:03:23.467 test-sad: explicitly disabled via build config
00:03:23.467 test-security-perf: explicitly disabled via build config
00:03:23.467
00:03:23.467 libs:
00:03:23.467 argparse: explicitly disabled via build config
00:03:23.467 metrics: explicitly disabled via build config
00:03:23.467 acl: explicitly disabled via build config
00:03:23.467 bbdev: explicitly disabled via build config
00:03:23.467 bitratestats: explicitly disabled via build config
00:03:23.467 bpf: explicitly disabled via build config
00:03:23.467 cfgfile: explicitly disabled via build config
00:03:23.467 distributor: explicitly disabled via build config
00:03:23.467 efd: explicitly disabled via build config
00:03:23.467 eventdev: explicitly disabled via build config
00:03:23.467 dispatcher: explicitly disabled via build config
00:03:23.467 gpudev: explicitly disabled via build config
00:03:23.467 gro: explicitly disabled via build config
00:03:23.467 gso: explicitly disabled via build config
00:03:23.467 ip_frag: explicitly disabled via build config
00:03:23.467 jobstats: explicitly disabled via build config
00:03:23.467 latencystats: explicitly disabled via build config
00:03:23.467 lpm: explicitly disabled via build config
00:03:23.467 member: explicitly disabled via build config
00:03:23.467 pcapng: explicitly disabled via build config
00:03:23.467 rawdev: explicitly disabled via build config
00:03:23.467 regexdev: explicitly disabled via build config
00:03:23.467 mldev: explicitly disabled via build config
00:03:23.467 rib: explicitly disabled via build config
00:03:23.467 sched: explicitly disabled via build config
00:03:23.467 stack: explicitly disabled via build config
00:03:23.467 ipsec: explicitly disabled via build config
00:03:23.467 pdcp: explicitly disabled via build config
00:03:23.467 fib: explicitly disabled via build config
00:03:23.467 port: explicitly disabled via build config
00:03:23.467 pdump: explicitly disabled via build config
00:03:23.467 table: explicitly disabled via build config
00:03:23.467 pipeline: explicitly disabled via build config
00:03:23.467 graph: explicitly disabled via build config
00:03:23.467 node: explicitly disabled via build config
00:03:23.467
00:03:23.467 drivers:
00:03:23.467 common/cpt: not in enabled drivers build config
00:03:23.467 common/dpaax: not in enabled drivers build config
00:03:23.467 common/iavf: not in enabled drivers build config
00:03:23.467 common/idpf: not in enabled drivers build config
00:03:23.467 common/ionic: not in enabled drivers build config
00:03:23.467 common/mvep: not in enabled drivers build config
00:03:23.467 common/octeontx: not in enabled drivers build config
00:03:23.467 bus/auxiliary: not in enabled drivers build config
00:03:23.467 bus/cdx: not in enabled drivers build config
00:03:23.467 bus/dpaa: not in enabled drivers build config
00:03:23.467 bus/fslmc: not in enabled drivers build config
00:03:23.467 bus/ifpga: not in enabled drivers build config
00:03:23.467 bus/platform: not in enabled drivers build config
00:03:23.467 bus/uacce: not in enabled drivers build config
00:03:23.467 bus/vmbus: not in enabled drivers build config
00:03:23.467 common/cnxk: not in enabled drivers build config
00:03:23.467 common/mlx5: not in enabled drivers build config
00:03:23.467 common/nfp: not in enabled drivers build config
00:03:23.467 common/nitrox: not in enabled drivers build config
00:03:23.467 common/qat: not in enabled drivers build config
00:03:23.467 common/sfc_efx: not in enabled drivers build config
00:03:23.467 mempool/bucket: not in enabled drivers build config
00:03:23.467 mempool/cnxk: not in enabled drivers build config
00:03:23.467 mempool/dpaa: not in enabled drivers build config
00:03:23.467 mempool/dpaa2: not in enabled drivers build config
00:03:23.467 mempool/octeontx: not in enabled drivers build config
00:03:23.467 mempool/stack: not in enabled drivers build config
00:03:23.467 dma/cnxk: not in enabled drivers build config
00:03:23.467 dma/dpaa: not in enabled drivers build config
00:03:23.467 dma/dpaa2: not in enabled drivers build config
00:03:23.467 dma/hisilicon: not in enabled drivers build config
00:03:23.467 dma/idxd: not in enabled drivers build config
00:03:23.467 dma/ioat: not in enabled drivers build config
00:03:23.467 dma/skeleton: not in enabled drivers build config
00:03:23.467 net/af_packet: not in enabled drivers build config
00:03:23.467 net/af_xdp: not in enabled drivers build config
00:03:23.467 net/ark: not in enabled drivers build config
00:03:23.467 net/atlantic: not in enabled drivers build config
00:03:23.467 net/avp: not in enabled drivers build config
00:03:23.467 net/axgbe: not in enabled drivers build config
00:03:23.467 net/bnx2x: not in enabled drivers build config
00:03:23.467 net/bnxt: not in enabled drivers build config
00:03:23.467 net/bonding: not in enabled drivers build config
00:03:23.467 net/cnxk: not in enabled drivers build config
00:03:23.467 net/cpfl: not in enabled drivers build config
00:03:23.467 net/cxgbe: not in enabled drivers build config
00:03:23.467 net/dpaa: not in enabled drivers build config
00:03:23.467 net/dpaa2: not in enabled drivers build config
00:03:23.467 net/e1000: not in enabled drivers build config
00:03:23.467 net/ena: not in enabled drivers build config
00:03:23.467 net/enetc: not in enabled drivers build config
00:03:23.467 net/enetfec: not in enabled drivers build config
00:03:23.467 net/enic: not in enabled drivers build config
00:03:23.467 net/failsafe: not in enabled drivers build config
00:03:23.467 net/fm10k: not in enabled drivers build config
00:03:23.467 net/gve: not in enabled drivers build config
00:03:23.467 net/hinic: not in enabled drivers build config
00:03:23.467 net/hns3: not in enabled drivers build config
00:03:23.467 net/i40e: not in enabled drivers build config
00:03:23.467 net/iavf: not in enabled drivers build config
00:03:23.467 net/ice: not in enabled drivers build config
00:03:23.467 net/idpf: not in enabled drivers build config
00:03:23.467 net/igc: not in enabled drivers build config
00:03:23.467 net/ionic: not in enabled drivers build config
00:03:23.467 net/ipn3ke: not in enabled drivers build config
00:03:23.467 net/ixgbe: not in enabled drivers build config
00:03:23.467 net/mana: not in enabled drivers build config
00:03:23.467 net/memif: not in enabled drivers build config
00:03:23.467 net/mlx4: not in enabled drivers build config
00:03:23.467 net/mlx5: not in enabled drivers build config
00:03:23.467 net/mvneta: not in enabled drivers build config
00:03:23.467 net/mvpp2: not in enabled drivers build config
00:03:23.467 net/netvsc: not in enabled drivers build config
00:03:23.467 net/nfb: not in enabled drivers build config
00:03:23.467 net/nfp: not in enabled drivers build config
00:03:23.467 net/ngbe: not in enabled drivers build config
00:03:23.467 net/null: not in enabled drivers build config
00:03:23.467 net/octeontx: not in enabled drivers build config
00:03:23.467 net/octeon_ep: not in enabled drivers build config
00:03:23.467 net/pcap: not in enabled drivers build config
00:03:23.467 net/pfe: not in enabled drivers build config
00:03:23.467 net/qede: not in enabled drivers build config
00:03:23.467 net/ring: not in enabled drivers build config
00:03:23.467 net/sfc: not in enabled drivers build config
00:03:23.467 net/softnic: not in enabled drivers build config
00:03:23.467 net/tap: not in enabled drivers build config
00:03:23.467 net/thunderx: not in enabled drivers build config
00:03:23.467 net/txgbe: not in enabled drivers build config
00:03:23.467 net/vdev_netvsc: not in enabled drivers build config
00:03:23.467 net/vhost: not in enabled drivers build config
00:03:23.467 net/virtio: not in enabled drivers build config
00:03:23.467 net/vmxnet3: not in enabled drivers build config
00:03:23.467 raw/*: missing internal dependency, "rawdev"
00:03:23.467 crypto/armv8: not in enabled drivers build config
00:03:23.467 crypto/bcmfs: not in enabled drivers build config
00:03:23.467 crypto/caam_jr: not in enabled drivers build config
00:03:23.467 crypto/ccp: not in enabled drivers build config
00:03:23.467 crypto/cnxk: not in enabled drivers build config
00:03:23.467 crypto/dpaa_sec: not in enabled drivers build config
00:03:23.467 crypto/dpaa2_sec: not in enabled drivers build config
00:03:23.467 crypto/ipsec_mb: not in enabled drivers build config
00:03:23.467 crypto/mlx5: not in enabled drivers build config
00:03:23.467 crypto/mvsam: not in enabled drivers build config
00:03:23.467 crypto/nitrox: not in enabled drivers build config
00:03:23.467 crypto/null: not in enabled drivers build config
00:03:23.468 crypto/octeontx: not in enabled drivers build config
00:03:23.468 crypto/openssl: not in enabled drivers build config
00:03:23.468 crypto/scheduler: not in enabled drivers build config
00:03:23.468 crypto/uadk: not in enabled drivers build config
00:03:23.468 crypto/virtio: not in enabled drivers build config
00:03:23.468 compress/isal: not in enabled drivers build config
00:03:23.468 compress/mlx5: not in enabled drivers build config
00:03:23.468 compress/nitrox: not in enabled drivers build config
00:03:23.468 compress/octeontx: not in enabled drivers build config
00:03:23.468 compress/zlib: not in enabled drivers build config
00:03:23.468 regex/*: missing internal dependency, "regexdev"
00:03:23.468 ml/*: missing internal dependency, "mldev"
00:03:23.468 vdpa/ifc: not in enabled drivers build config
00:03:23.468 vdpa/mlx5: not in enabled drivers build config
00:03:23.468 vdpa/nfp: not in enabled drivers build config
00:03:23.468 vdpa/sfc: not in enabled drivers build config
00:03:23.468 event/*: missing internal dependency, "eventdev"
00:03:23.468 baseband/*: missing internal dependency, "bbdev"
00:03:23.468 gpu/*: missing internal dependency, "gpudev"
00:03:23.468
00:03:23.468
00:03:24.033 Build targets in project: 85
00:03:24.033
00:03:24.033 DPDK 24.03.0
00:03:24.033
00:03:24.033 User defined options
00:03:24.033 buildtype : debug
00:03:24.034 default_library : shared
00:03:24.034 libdir : lib
00:03:24.034 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:24.034 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:24.034 c_link_args :
00:03:24.034 cpu_instruction_set: native
00:03:24.034 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:24.034 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:24.034 enable_docs : false
00:03:24.034 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:24.034 enable_kmods : false
00:03:24.034 max_lcores : 128
00:03:24.034 tests : false
00:03:24.034 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:24.968 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:24.968 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:24.968 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:24.968 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:25.226 [4/268] Linking static target lib/librte_log.a
00:03:25.226 [5/268] Linking static target lib/librte_kvargs.a
00:03:25.226 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:25.832 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:25.832 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:25.832 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.090 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:26.090 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:26.090 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:26.348 [13/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.348 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:26.348 [15/268] Linking target lib/librte_log.so.24.1
00:03:26.606 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:26.606 [17/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:26.606 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:26.606 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:26.606 [20/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:26.606 [21/268] Linking target lib/librte_kvargs.so.24.1
00:03:26.606 [22/268] Linking static target lib/librte_telemetry.a
00:03:27.172 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:27.172 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:27.172 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:27.172 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:27.430 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:27.430 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:27.430 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:27.689 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:27.689 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:27.689 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:27.689 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:27.689 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.947 [35/268] Linking target lib/librte_telemetry.so.24.1
00:03:27.947 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:27.947 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:28.205 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:28.463 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:28.721 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:28.721 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:28.721 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:28.721 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:28.721 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:28.721 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:28.980 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:28.980 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:28.980 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:28.980 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:29.238 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:29.497 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:29.497 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:29.755 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:29.755 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:30.012 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:30.012 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:30.012 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:30.270 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:30.270 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:30.270 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:30.270 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:30.270 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:30.527 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:30.527 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:30.527 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:30.785 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:31.041 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:31.041 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:31.298 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:31.298 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:31.298 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:31.298 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:31.298 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:31.555 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:31.555 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:31.555 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:31.555 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:31.813 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:31.813 
[79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:31.813 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:32.071 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:32.071 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:32.071 [83/268] Linking static target lib/librte_ring.a 00:03:32.329 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:32.329 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:32.587 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:32.587 [87/268] Linking static target lib/librte_eal.a 00:03:32.844 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:32.844 [89/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.844 [90/268] Linking static target lib/librte_rcu.a 00:03:32.844 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:33.101 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:33.101 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:33.101 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:33.358 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:33.358 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:33.358 [97/268] Linking static target lib/librte_mempool.a 00:03:33.617 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:33.617 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.617 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:33.875 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:34.133 [102/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:34.133 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:34.133 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:34.390 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:34.390 [106/268] Linking static target lib/librte_mbuf.a 00:03:34.648 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:34.648 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:34.648 [109/268] Linking static target lib/librte_meter.a 00:03:34.905 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:34.905 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:34.905 [112/268] Linking static target lib/librte_net.a 00:03:34.905 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.163 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:35.163 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:35.421 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.421 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.678 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.243 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:36.501 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:36.501 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:36.501 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:37.065 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:37.323 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:37.323 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:37.323 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:37.323 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:37.323 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:37.323 [129/268] Linking static target lib/librte_pci.a 00:03:37.580 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:37.580 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:37.580 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:37.580 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:37.580 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:37.837 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:37.837 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:37.837 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:37.837 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:37.837 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:37.837 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:37.837 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:38.093 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:38.093 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:38.093 [144/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.093 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:38.350 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:38.350 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:38.350 [148/268] Linking static target lib/librte_ethdev.a 00:03:38.607 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:38.607 [150/268] Linking static target lib/librte_cmdline.a 00:03:38.607 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:39.172 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:39.172 [153/268] Linking static target lib/librte_timer.a 00:03:39.172 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:39.173 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:39.430 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:39.688 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:39.688 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:39.688 [159/268] Linking static target lib/librte_compressdev.a 00:03:39.946 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:40.204 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:40.204 [162/268] Linking static target lib/librte_hash.a 00:03:40.204 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:40.462 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:40.462 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:40.719 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.719 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:40.977 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:40.977 [169/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.977 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:41.234 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:41.234 [172/268] Linking static target lib/librte_dmadev.a 00:03:41.491 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:41.491 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:41.491 [175/268] Linking static target lib/librte_cryptodev.a 00:03:41.748 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:41.748 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.006 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:42.006 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:42.264 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:42.264 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:42.521 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:42.522 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.522 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:42.780 [185/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:42.780 [186/268] Linking static target lib/librte_security.a 00:03:43.348 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:43.348 [188/268] Linking static target lib/librte_reorder.a 00:03:43.348 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:43.348 [190/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:43.348 [191/268] Linking static target lib/librte_power.a 00:03:43.606 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:43.866 [193/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.127 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:44.127 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.386 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:44.645 [197/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.903 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:45.162 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:45.420 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:45.420 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:45.420 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:45.679 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:45.679 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:45.679 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:46.245 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:46.245 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:46.245 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:46.503 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:46.503 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:46.503 [211/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:46.503 [212/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:46.503 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:46.503 [214/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:46.761 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:46.761 [216/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:46.761 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:46.761 [218/268] Linking static target drivers/librte_mempool_ring.a 00:03:46.761 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:46.761 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:46.761 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:46.761 [222/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:46.761 [223/268] Linking static target drivers/librte_bus_pci.a 00:03:46.761 [224/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:46.761 [225/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:46.761 [226/268] Linking static target drivers/librte_bus_vdev.a 00:03:47.339 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.597 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.597 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.597 [230/268] Linking target lib/librte_eal.so.24.1 00:03:47.855 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:47.855 [232/268] Linking target lib/librte_ring.so.24.1 00:03:47.855 [233/268] Linking target lib/librte_pci.so.24.1 00:03:47.855 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:47.855 [235/268] Linking target lib/librte_timer.so.24.1 00:03:47.855 [236/268] Linking target lib/librte_meter.so.24.1 00:03:47.855 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:48.113 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:48.113 [239/268] Generating symbol 
file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:48.113 [240/268] Linking target lib/librte_mempool.so.24.1 00:03:48.113 [241/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:48.114 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:48.114 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:48.114 [244/268] Linking static target lib/librte_vhost.a 00:03:48.114 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:48.114 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:48.114 [247/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:48.373 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:48.373 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:48.373 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:48.373 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:48.680 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:48.680 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:48.680 [254/268] Linking target lib/librte_reorder.so.24.1 00:03:48.680 [255/268] Linking target lib/librte_net.so.24.1 00:03:48.680 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:48.680 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:48.680 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.680 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:48.680 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:48.680 [261/268] Linking target lib/librte_hash.so.24.1 00:03:48.938 [262/268] Linking target lib/librte_security.so.24.1 00:03:48.938 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:48.938 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:48.938 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:49.196 [266/268] Linking target lib/librte_power.so.24.1 00:03:49.455 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.713 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:49.713 INFO: autodetecting backend as ninja 00:03:49.713 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:16.288 CC lib/ut_mock/mock.o 00:04:16.288 CC lib/log/log_flags.o 00:04:16.288 CC lib/ut/ut.o 00:04:16.288 CC lib/log/log.o 00:04:16.288 CC lib/log/log_deprecated.o 00:04:16.288 LIB libspdk_ut.a 00:04:16.288 SO libspdk_ut.so.2.0 00:04:16.288 LIB libspdk_log.a 00:04:16.288 SYMLINK libspdk_ut.so 00:04:16.288 SO libspdk_log.so.7.0 00:04:16.288 LIB libspdk_ut_mock.a 00:04:16.288 SO libspdk_ut_mock.so.6.0 00:04:16.288 SYMLINK libspdk_log.so 00:04:16.288 SYMLINK libspdk_ut_mock.so 00:04:16.545 CC lib/ioat/ioat.o 00:04:16.545 CC lib/util/base64.o 00:04:16.545 CC lib/dma/dma.o 00:04:16.545 CC lib/util/bit_array.o 00:04:16.545 CC lib/util/cpuset.o 00:04:16.545 CC lib/util/crc16.o 00:04:16.545 CC lib/util/crc32c.o 00:04:16.545 CC lib/util/crc32.o 00:04:16.545 CXX lib/trace_parser/trace.o 00:04:16.545 CC lib/vfio_user/host/vfio_user_pci.o 00:04:16.803 CC lib/util/crc32_ieee.o 00:04:16.803 CC lib/util/crc64.o 00:04:16.803 CC lib/util/dif.o 
00:04:16.803 CC lib/util/fd.o 00:04:16.803 LIB libspdk_dma.a 00:04:16.803 SO libspdk_dma.so.5.0 00:04:16.803 CC lib/vfio_user/host/vfio_user.o 00:04:16.803 CC lib/util/fd_group.o 00:04:17.060 CC lib/util/file.o 00:04:17.060 SYMLINK libspdk_dma.so 00:04:17.060 CC lib/util/hexlify.o 00:04:17.060 CC lib/util/iov.o 00:04:17.060 CC lib/util/math.o 00:04:17.060 LIB libspdk_ioat.a 00:04:17.060 SO libspdk_ioat.so.7.0 00:04:17.060 CC lib/util/net.o 00:04:17.060 SYMLINK libspdk_ioat.so 00:04:17.060 CC lib/util/pipe.o 00:04:17.060 CC lib/util/strerror_tls.o 00:04:17.060 LIB libspdk_vfio_user.a 00:04:17.060 CC lib/util/string.o 00:04:17.317 CC lib/util/uuid.o 00:04:17.317 CC lib/util/xor.o 00:04:17.317 SO libspdk_vfio_user.so.5.0 00:04:17.317 CC lib/util/zipf.o 00:04:17.317 SYMLINK libspdk_vfio_user.so 00:04:17.317 CC lib/util/md5.o 00:04:17.574 LIB libspdk_util.a 00:04:17.574 SO libspdk_util.so.10.0 00:04:17.831 SYMLINK libspdk_util.so 00:04:17.831 LIB libspdk_trace_parser.a 00:04:17.831 SO libspdk_trace_parser.so.6.0 00:04:18.088 CC lib/env_dpdk/env.o 00:04:18.088 CC lib/env_dpdk/memory.o 00:04:18.088 CC lib/env_dpdk/pci.o 00:04:18.088 CC lib/rdma_provider/common.o 00:04:18.088 CC lib/idxd/idxd.o 00:04:18.088 CC lib/conf/conf.o 00:04:18.088 CC lib/vmd/vmd.o 00:04:18.088 SYMLINK libspdk_trace_parser.so 00:04:18.088 CC lib/rdma_utils/rdma_utils.o 00:04:18.089 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:18.089 CC lib/json/json_parse.o 00:04:18.347 CC lib/json/json_util.o 00:04:18.347 LIB libspdk_rdma_utils.a 00:04:18.347 SO libspdk_rdma_utils.so.1.0 00:04:18.347 LIB libspdk_rdma_provider.a 00:04:18.347 SYMLINK libspdk_rdma_utils.so 00:04:18.347 SO libspdk_rdma_provider.so.6.0 00:04:18.347 LIB libspdk_conf.a 00:04:18.347 CC lib/vmd/led.o 00:04:18.347 CC lib/env_dpdk/init.o 00:04:18.347 SO libspdk_conf.so.6.0 00:04:18.347 SYMLINK libspdk_rdma_provider.so 00:04:18.604 CC lib/idxd/idxd_user.o 00:04:18.604 CC lib/idxd/idxd_kernel.o 00:04:18.604 SYMLINK libspdk_conf.so 00:04:18.604 CC lib/json/json_write.o 00:04:18.604 CC lib/env_dpdk/threads.o 00:04:18.604 CC lib/env_dpdk/pci_ioat.o 00:04:18.604 CC lib/env_dpdk/pci_virtio.o 00:04:18.604 CC lib/env_dpdk/pci_vmd.o 00:04:18.863 CC lib/env_dpdk/pci_idxd.o 00:04:18.863 CC lib/env_dpdk/pci_event.o 00:04:18.863 LIB libspdk_json.a 00:04:18.863 CC lib/env_dpdk/sigbus_handler.o 00:04:18.863 SO libspdk_json.so.6.0 00:04:18.863 LIB libspdk_idxd.a 00:04:18.863 CC lib/env_dpdk/pci_dpdk.o 00:04:18.863 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:18.863 SYMLINK libspdk_json.so 00:04:18.863 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:18.863 SO libspdk_idxd.so.12.1 00:04:19.121 SYMLINK libspdk_idxd.so 00:04:19.121 LIB libspdk_vmd.a 00:04:19.121 SO libspdk_vmd.so.6.0 00:04:19.121 SYMLINK libspdk_vmd.so 00:04:19.379 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.379 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.379 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.379 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:19.637 LIB libspdk_jsonrpc.a 00:04:19.637 SO libspdk_jsonrpc.so.6.0 00:04:19.637 SYMLINK libspdk_jsonrpc.so 00:04:19.894 CC lib/rpc/rpc.o 00:04:19.894 LIB libspdk_env_dpdk.a 00:04:20.152 SO libspdk_env_dpdk.so.15.0 00:04:20.152 LIB libspdk_rpc.a 00:04:20.152 SO libspdk_rpc.so.6.0 00:04:20.152 SYMLINK libspdk_env_dpdk.so 00:04:20.152 SYMLINK libspdk_rpc.so 00:04:20.409 CC lib/trace/trace.o 00:04:20.409 CC lib/trace/trace_flags.o 00:04:20.409 CC lib/trace/trace_rpc.o 00:04:20.409 CC lib/keyring/keyring.o 00:04:20.409 CC lib/keyring/keyring_rpc.o 00:04:20.409 CC lib/notify/notify_rpc.o 00:04:20.409 
CC lib/notify/notify.o 00:04:20.666 LIB libspdk_notify.a 00:04:20.666 SO libspdk_notify.so.6.0 00:04:20.666 LIB libspdk_trace.a 00:04:20.666 LIB libspdk_keyring.a 00:04:20.666 SYMLINK libspdk_notify.so 00:04:20.666 SO libspdk_trace.so.11.0 00:04:20.922 SO libspdk_keyring.so.2.0 00:04:20.922 SYMLINK libspdk_trace.so 00:04:20.922 SYMLINK libspdk_keyring.so 00:04:21.180 CC lib/sock/sock_rpc.o 00:04:21.180 CC lib/sock/sock.o 00:04:21.180 CC lib/thread/thread.o 00:04:21.180 CC lib/thread/iobuf.o 00:04:21.746 LIB libspdk_sock.a 00:04:21.746 SO libspdk_sock.so.10.0 00:04:21.746 SYMLINK libspdk_sock.so 00:04:22.003 CC lib/nvme/nvme_ctrlr.o 00:04:22.003 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:22.003 CC lib/nvme/nvme_ns_cmd.o 00:04:22.003 CC lib/nvme/nvme_fabric.o 00:04:22.003 CC lib/nvme/nvme_ns.o 00:04:22.003 CC lib/nvme/nvme_pcie.o 00:04:22.003 CC lib/nvme/nvme.o 00:04:22.003 CC lib/nvme/nvme_qpair.o 00:04:22.003 CC lib/nvme/nvme_pcie_common.o 00:04:22.935 LIB libspdk_thread.a 00:04:22.935 SO libspdk_thread.so.10.1 00:04:22.935 SYMLINK libspdk_thread.so 00:04:22.935 CC lib/nvme/nvme_quirks.o 00:04:22.935 CC lib/accel/accel.o 00:04:23.193 CC lib/nvme/nvme_transport.o 00:04:23.193 CC lib/nvme/nvme_discovery.o 00:04:23.193 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:23.451 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:23.451 CC lib/accel/accel_rpc.o 00:04:23.710 CC lib/blob/blobstore.o 00:04:23.710 CC lib/init/json_config.o 00:04:23.710 CC lib/virtio/virtio.o 00:04:23.980 CC lib/virtio/virtio_vhost_user.o 00:04:23.980 CC lib/virtio/virtio_vfio_user.o 00:04:24.239 CC lib/init/subsystem.o 00:04:24.239 CC lib/nvme/nvme_tcp.o 00:04:24.239 CC lib/nvme/nvme_opal.o 00:04:24.239 CC lib/virtio/virtio_pci.o 00:04:24.496 CC lib/nvme/nvme_io_msg.o 00:04:24.496 CC lib/nvme/nvme_poll_group.o 00:04:24.496 CC lib/init/subsystem_rpc.o 00:04:24.496 CC lib/nvme/nvme_zns.o 00:04:24.496 CC lib/accel/accel_sw.o 00:04:24.753 CC lib/init/rpc.o 00:04:24.753 CC lib/nvme/nvme_stubs.o 00:04:24.753 LIB libspdk_virtio.a 00:04:25.012 SO libspdk_virtio.so.7.0 00:04:25.012 CC lib/nvme/nvme_auth.o 00:04:25.012 LIB libspdk_init.a 00:04:25.012 SYMLINK libspdk_virtio.so 00:04:25.012 CC lib/nvme/nvme_cuse.o 00:04:25.012 SO libspdk_init.so.6.0 00:04:25.268 SYMLINK libspdk_init.so 00:04:25.268 CC lib/nvme/nvme_rdma.o 00:04:25.268 LIB libspdk_accel.a 00:04:25.268 SO libspdk_accel.so.16.0 00:04:25.268 CC lib/blob/request.o 00:04:25.268 SYMLINK libspdk_accel.so 00:04:25.525 CC lib/blob/zeroes.o 00:04:25.525 CC lib/fsdev/fsdev.o 00:04:25.525 CC lib/fsdev/fsdev_io.o 00:04:25.525 CC lib/fsdev/fsdev_rpc.o 00:04:25.781 CC lib/blob/blob_bs_dev.o 00:04:26.038 CC lib/bdev/bdev.o 00:04:26.038 CC lib/bdev/bdev_rpc.o 00:04:26.038 CC lib/bdev/bdev_zone.o 00:04:26.038 CC lib/bdev/part.o 00:04:26.038 CC lib/event/app.o 00:04:26.296 CC lib/event/reactor.o 00:04:26.296 CC lib/bdev/scsi_nvme.o 00:04:26.296 LIB libspdk_fsdev.a 00:04:26.296 SO libspdk_fsdev.so.1.0 00:04:26.552 SYMLINK libspdk_fsdev.so 00:04:26.552 CC lib/event/log_rpc.o 00:04:26.552 CC lib/event/app_rpc.o 00:04:26.552 CC lib/event/scheduler_static.o 00:04:26.552 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:26.861 LIB libspdk_event.a 00:04:26.861 SO libspdk_event.so.14.0 00:04:27.118 SYMLINK libspdk_event.so 00:04:27.374 LIB libspdk_nvme.a 00:04:27.374 LIB libspdk_fuse_dispatcher.a 00:04:27.374 SO libspdk_nvme.so.14.0 00:04:27.632 SO libspdk_fuse_dispatcher.so.1.0 00:04:27.632 SYMLINK libspdk_fuse_dispatcher.so 00:04:27.890 SYMLINK libspdk_nvme.so 00:04:28.454 LIB libspdk_blob.a 00:04:28.454 SO 
libspdk_blob.so.11.0 00:04:28.454 SYMLINK libspdk_blob.so 00:04:28.712 CC lib/blobfs/blobfs.o 00:04:28.712 CC lib/blobfs/tree.o 00:04:28.712 CC lib/lvol/lvol.o 00:04:28.970 LIB libspdk_bdev.a 00:04:29.231 SO libspdk_bdev.so.16.0 00:04:29.231 SYMLINK libspdk_bdev.so 00:04:29.489 CC lib/ublk/ublk.o 00:04:29.489 CC lib/ublk/ublk_rpc.o 00:04:29.489 CC lib/scsi/dev.o 00:04:29.489 CC lib/scsi/lun.o 00:04:29.489 CC lib/scsi/port.o 00:04:29.489 CC lib/nbd/nbd.o 00:04:29.489 CC lib/ftl/ftl_core.o 00:04:29.489 CC lib/nvmf/ctrlr.o 00:04:29.747 CC lib/ftl/ftl_init.o 00:04:29.747 CC lib/ftl/ftl_layout.o 00:04:29.747 CC lib/scsi/scsi.o 00:04:30.005 CC lib/nbd/nbd_rpc.o 00:04:30.005 CC lib/ftl/ftl_debug.o 00:04:30.005 LIB libspdk_blobfs.a 00:04:30.005 SO libspdk_blobfs.so.10.0 00:04:30.005 LIB libspdk_lvol.a 00:04:30.005 SO libspdk_lvol.so.10.0 00:04:30.005 SYMLINK libspdk_blobfs.so 00:04:30.263 CC lib/ftl/ftl_io.o 00:04:30.263 CC lib/scsi/scsi_bdev.o 00:04:30.263 CC lib/ftl/ftl_sb.o 00:04:30.263 SYMLINK libspdk_lvol.so 00:04:30.263 CC lib/nvmf/ctrlr_discovery.o 00:04:30.263 CC lib/nvmf/ctrlr_bdev.o 00:04:30.263 CC lib/nvmf/subsystem.o 00:04:30.263 LIB libspdk_nbd.a 00:04:30.263 SO libspdk_nbd.so.7.0 00:04:30.521 CC lib/nvmf/nvmf.o 00:04:30.521 SYMLINK libspdk_nbd.so 00:04:30.521 CC lib/ftl/ftl_l2p.o 00:04:30.521 CC lib/ftl/ftl_l2p_flat.o 00:04:30.521 CC lib/ftl/ftl_nv_cache.o 00:04:30.521 LIB libspdk_ublk.a 00:04:30.779 SO libspdk_ublk.so.3.0 00:04:30.779 SYMLINK libspdk_ublk.so 00:04:30.779 CC lib/nvmf/nvmf_rpc.o 00:04:30.779 CC lib/ftl/ftl_band.o 00:04:30.779 CC lib/ftl/ftl_band_ops.o 00:04:30.779 CC lib/ftl/ftl_writer.o 00:04:31.037 CC lib/ftl/ftl_rq.o 00:04:31.037 CC lib/scsi/scsi_pr.o 00:04:31.295 CC lib/nvmf/transport.o 00:04:31.295 CC lib/ftl/ftl_reloc.o 00:04:31.295 CC lib/nvmf/tcp.o 00:04:31.295 CC lib/nvmf/stubs.o 00:04:31.295 CC lib/nvmf/mdns_server.o 00:04:31.553 CC lib/ftl/ftl_l2p_cache.o 00:04:31.553 CC lib/scsi/scsi_rpc.o 00:04:31.812 CC lib/nvmf/rdma.o 00:04:31.812 CC lib/scsi/task.o 00:04:31.812 CC lib/nvmf/auth.o 00:04:31.812 CC lib/ftl/ftl_p2l.o 00:04:32.071 CC lib/ftl/ftl_p2l_log.o 00:04:32.071 LIB libspdk_scsi.a 00:04:32.071 SO libspdk_scsi.so.9.0 00:04:32.330 SYMLINK libspdk_scsi.so 00:04:32.330 CC lib/ftl/mngt/ftl_mngt.o 00:04:32.330 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:32.330 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:32.330 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:32.589 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:32.589 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:32.589 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:32.589 CC lib/iscsi/conn.o 00:04:32.589 CC lib/vhost/vhost.o 00:04:32.589 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:32.589 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:32.869 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:32.869 CC lib/iscsi/init_grp.o 00:04:32.869 CC lib/vhost/vhost_rpc.o 00:04:33.127 CC lib/vhost/vhost_scsi.o 00:04:33.127 CC lib/vhost/vhost_blk.o 00:04:33.127 CC lib/vhost/rte_vhost_user.o 00:04:33.127 CC lib/iscsi/iscsi.o 00:04:33.386 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:33.644 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:33.644 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:33.644 CC lib/iscsi/param.o 00:04:33.644 CC lib/ftl/utils/ftl_conf.o 00:04:33.903 CC lib/iscsi/portal_grp.o 00:04:34.161 CC lib/iscsi/tgt_node.o 00:04:34.161 CC lib/ftl/utils/ftl_md.o 00:04:34.161 CC lib/ftl/utils/ftl_mempool.o 00:04:34.161 CC lib/ftl/utils/ftl_bitmap.o 00:04:34.418 CC lib/ftl/utils/ftl_property.o 00:04:34.418 CC lib/iscsi/iscsi_subsystem.o 00:04:34.418 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 
00:04:34.418 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:34.418 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:34.677 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:34.677 CC lib/iscsi/iscsi_rpc.o 00:04:34.677 CC lib/iscsi/task.o 00:04:34.677 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:34.677 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:34.677 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:34.936 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:34.936 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:34.936 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:34.936 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:34.936 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:34.936 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:34.936 CC lib/ftl/base/ftl_base_dev.o 00:04:34.936 LIB libspdk_vhost.a 00:04:35.194 SO libspdk_vhost.so.8.0 00:04:35.194 CC lib/ftl/base/ftl_base_bdev.o 00:04:35.194 CC lib/ftl/ftl_trace.o 00:04:35.194 SYMLINK libspdk_vhost.so 00:04:35.452 LIB libspdk_nvmf.a 00:04:35.452 LIB libspdk_iscsi.a 00:04:35.452 LIB libspdk_ftl.a 00:04:35.452 SO libspdk_nvmf.so.19.0 00:04:35.710 SO libspdk_iscsi.so.8.0 00:04:35.710 SYMLINK libspdk_nvmf.so 00:04:35.710 SO libspdk_ftl.so.9.0 00:04:35.710 SYMLINK libspdk_iscsi.so 00:04:35.968 SYMLINK libspdk_ftl.so 00:04:36.533 CC module/env_dpdk/env_dpdk_rpc.o 00:04:36.533 CC module/fsdev/aio/fsdev_aio.o 00:04:36.533 CC module/blob/bdev/blob_bdev.o 00:04:36.533 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:36.533 CC module/sock/posix/posix.o 00:04:36.533 CC module/keyring/file/keyring.o 00:04:36.533 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:36.533 CC module/scheduler/gscheduler/gscheduler.o 00:04:36.533 CC module/keyring/linux/keyring.o 00:04:36.533 CC module/accel/error/accel_error.o 00:04:36.791 LIB libspdk_env_dpdk_rpc.a 00:04:36.791 SO libspdk_env_dpdk_rpc.so.6.0 00:04:36.791 CC module/keyring/linux/keyring_rpc.o 00:04:36.791 LIB libspdk_scheduler_gscheduler.a 00:04:36.791 SYMLINK libspdk_env_dpdk_rpc.so 00:04:36.791 CC module/keyring/file/keyring_rpc.o 00:04:36.791 LIB libspdk_scheduler_dynamic.a 00:04:36.791 CC module/accel/error/accel_error_rpc.o 00:04:36.791 SO libspdk_scheduler_gscheduler.so.4.0 00:04:36.791 LIB libspdk_scheduler_dpdk_governor.a 00:04:36.791 SO libspdk_scheduler_dynamic.so.4.0 00:04:36.791 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:36.791 LIB libspdk_keyring_linux.a 00:04:37.049 SYMLINK libspdk_scheduler_gscheduler.so 00:04:37.049 SYMLINK libspdk_scheduler_dynamic.so 00:04:37.049 SO libspdk_keyring_linux.so.1.0 00:04:37.049 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:37.049 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:37.049 LIB libspdk_blob_bdev.a 00:04:37.049 SO libspdk_blob_bdev.so.11.0 00:04:37.049 LIB libspdk_accel_error.a 00:04:37.049 LIB libspdk_keyring_file.a 00:04:37.049 SYMLINK libspdk_keyring_linux.so 00:04:37.049 CC module/fsdev/aio/linux_aio_mgr.o 00:04:37.049 SYMLINK libspdk_blob_bdev.so 00:04:37.049 SO libspdk_keyring_file.so.2.0 00:04:37.049 SO libspdk_accel_error.so.2.0 00:04:37.049 CC module/accel/ioat/accel_ioat.o 00:04:37.305 CC module/accel/iaa/accel_iaa.o 00:04:37.305 CC module/accel/dsa/accel_dsa.o 00:04:37.305 SYMLINK libspdk_keyring_file.so 00:04:37.305 CC module/accel/ioat/accel_ioat_rpc.o 00:04:37.305 SYMLINK libspdk_accel_error.so 00:04:37.305 CC module/accel/iaa/accel_iaa_rpc.o 00:04:37.305 CC module/accel/dsa/accel_dsa_rpc.o 00:04:37.305 LIB libspdk_fsdev_aio.a 00:04:37.305 LIB libspdk_sock_posix.a 00:04:37.562 SO libspdk_sock_posix.so.6.0 00:04:37.562 SO libspdk_fsdev_aio.so.1.0 00:04:37.562 LIB libspdk_accel_iaa.a 00:04:37.562 CC 
module/bdev/delay/vbdev_delay.o 00:04:37.562 CC module/blobfs/bdev/blobfs_bdev.o 00:04:37.562 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:37.562 SO libspdk_accel_iaa.so.3.0 00:04:37.562 SYMLINK libspdk_sock_posix.so 00:04:37.562 LIB libspdk_accel_ioat.a 00:04:37.562 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:37.562 SYMLINK libspdk_fsdev_aio.so 00:04:37.562 SO libspdk_accel_ioat.so.6.0 00:04:37.562 SYMLINK libspdk_accel_iaa.so 00:04:37.562 CC module/bdev/error/vbdev_error.o 00:04:37.562 SYMLINK libspdk_accel_ioat.so 00:04:37.818 CC module/bdev/gpt/gpt.o 00:04:37.818 LIB libspdk_accel_dsa.a 00:04:37.818 CC module/bdev/error/vbdev_error_rpc.o 00:04:37.818 SO libspdk_accel_dsa.so.5.0 00:04:37.818 CC module/bdev/lvol/vbdev_lvol.o 00:04:37.818 CC module/bdev/malloc/bdev_malloc.o 00:04:37.818 LIB libspdk_blobfs_bdev.a 00:04:37.818 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:37.818 SYMLINK libspdk_accel_dsa.so 00:04:37.818 SO libspdk_blobfs_bdev.so.6.0 00:04:37.818 CC module/bdev/null/bdev_null.o 00:04:38.075 CC module/bdev/gpt/vbdev_gpt.o 00:04:38.075 CC module/bdev/nvme/bdev_nvme.o 00:04:38.075 SYMLINK libspdk_blobfs_bdev.so 00:04:38.075 CC module/bdev/null/bdev_null_rpc.o 00:04:38.075 LIB libspdk_bdev_error.a 00:04:38.075 SO libspdk_bdev_error.so.6.0 00:04:38.075 LIB libspdk_bdev_delay.a 00:04:38.075 SO libspdk_bdev_delay.so.6.0 00:04:38.075 SYMLINK libspdk_bdev_error.so 00:04:38.075 CC module/bdev/passthru/vbdev_passthru.o 00:04:38.075 SYMLINK libspdk_bdev_delay.so 00:04:38.075 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:38.332 LIB libspdk_bdev_gpt.a 00:04:38.332 LIB libspdk_bdev_null.a 00:04:38.333 SO libspdk_bdev_gpt.so.6.0 00:04:38.333 CC module/bdev/raid/bdev_raid.o 00:04:38.333 SO libspdk_bdev_null.so.6.0 00:04:38.333 CC module/bdev/raid/bdev_raid_rpc.o 00:04:38.333 SYMLINK libspdk_bdev_gpt.so 00:04:38.333 CC module/bdev/split/vbdev_split.o 00:04:38.333 SYMLINK libspdk_bdev_null.so 00:04:38.333 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:38.333 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:38.590 LIB libspdk_bdev_malloc.a 00:04:38.590 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:38.590 LIB libspdk_bdev_passthru.a 00:04:38.590 SO libspdk_bdev_malloc.so.6.0 00:04:38.590 SO libspdk_bdev_passthru.so.6.0 00:04:38.590 CC module/bdev/aio/bdev_aio.o 00:04:38.590 CC module/bdev/raid/bdev_raid_sb.o 00:04:38.590 SYMLINK libspdk_bdev_malloc.so 00:04:38.590 SYMLINK libspdk_bdev_passthru.so 00:04:38.590 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:38.590 CC module/bdev/aio/bdev_aio_rpc.o 00:04:38.590 CC module/bdev/split/vbdev_split_rpc.o 00:04:38.847 CC module/bdev/raid/raid0.o 00:04:38.847 CC module/bdev/ftl/bdev_ftl.o 00:04:38.847 LIB libspdk_bdev_lvol.a 00:04:38.847 LIB libspdk_bdev_split.a 00:04:38.847 SO libspdk_bdev_lvol.so.6.0 00:04:38.847 SO libspdk_bdev_split.so.6.0 00:04:38.847 CC module/bdev/raid/raid1.o 00:04:38.847 LIB libspdk_bdev_aio.a 00:04:39.105 LIB libspdk_bdev_zone_block.a 00:04:39.105 SO libspdk_bdev_aio.so.6.0 00:04:39.105 SYMLINK libspdk_bdev_lvol.so 00:04:39.105 SO libspdk_bdev_zone_block.so.6.0 00:04:39.105 CC module/bdev/raid/concat.o 00:04:39.105 SYMLINK libspdk_bdev_split.so 00:04:39.105 SYMLINK libspdk_bdev_aio.so 00:04:39.105 CC module/bdev/nvme/nvme_rpc.o 00:04:39.105 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:39.105 SYMLINK libspdk_bdev_zone_block.so 00:04:39.105 CC module/bdev/nvme/bdev_mdns_client.o 00:04:39.362 CC module/bdev/nvme/vbdev_opal.o 00:04:39.362 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:39.362 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 
00:04:39.362 CC module/bdev/iscsi/bdev_iscsi.o 00:04:39.362 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:39.362 LIB libspdk_bdev_raid.a 00:04:39.362 LIB libspdk_bdev_ftl.a 00:04:39.620 SO libspdk_bdev_raid.so.6.0 00:04:39.620 SO libspdk_bdev_ftl.so.6.0 00:04:39.620 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:39.620 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:39.620 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:39.620 SYMLINK libspdk_bdev_raid.so 00:04:39.620 SYMLINK libspdk_bdev_ftl.so 00:04:39.878 LIB libspdk_bdev_iscsi.a 00:04:39.878 SO libspdk_bdev_iscsi.so.6.0 00:04:40.142 SYMLINK libspdk_bdev_iscsi.so 00:04:40.142 LIB libspdk_bdev_virtio.a 00:04:40.408 SO libspdk_bdev_virtio.so.6.0 00:04:40.408 SYMLINK libspdk_bdev_virtio.so 00:04:40.408 LIB libspdk_bdev_nvme.a 00:04:40.666 SO libspdk_bdev_nvme.so.7.0 00:04:40.666 SYMLINK libspdk_bdev_nvme.so 00:04:41.231 CC module/event/subsystems/fsdev/fsdev.o 00:04:41.231 CC module/event/subsystems/scheduler/scheduler.o 00:04:41.231 CC module/event/subsystems/iobuf/iobuf.o 00:04:41.231 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:41.231 CC module/event/subsystems/keyring/keyring.o 00:04:41.231 CC module/event/subsystems/sock/sock.o 00:04:41.231 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:41.231 CC module/event/subsystems/vmd/vmd.o 00:04:41.231 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:41.231 LIB libspdk_event_keyring.a 00:04:41.231 LIB libspdk_event_vhost_blk.a 00:04:41.231 SO libspdk_event_keyring.so.1.0 00:04:41.231 LIB libspdk_event_fsdev.a 00:04:41.231 SO libspdk_event_vhost_blk.so.3.0 00:04:41.231 SO libspdk_event_fsdev.so.1.0 00:04:41.231 SYMLINK libspdk_event_keyring.so 00:04:41.231 SYMLINK libspdk_event_vhost_blk.so 00:04:41.231 SYMLINK libspdk_event_fsdev.so 00:04:41.231 LIB libspdk_event_iobuf.a 00:04:41.489 LIB libspdk_event_scheduler.a 00:04:41.489 SO libspdk_event_iobuf.so.3.0 00:04:41.489 SO libspdk_event_scheduler.so.4.0 00:04:41.489 LIB libspdk_event_vmd.a 00:04:41.489 LIB libspdk_event_sock.a 00:04:41.489 SO libspdk_event_vmd.so.6.0 00:04:41.489 SYMLINK libspdk_event_scheduler.so 00:04:41.489 SYMLINK libspdk_event_iobuf.so 00:04:41.489 SO libspdk_event_sock.so.5.0 00:04:41.489 SYMLINK libspdk_event_vmd.so 00:04:41.489 SYMLINK libspdk_event_sock.so 00:04:41.747 CC module/event/subsystems/accel/accel.o 00:04:41.747 LIB libspdk_event_accel.a 00:04:42.005 SO libspdk_event_accel.so.6.0 00:04:42.005 SYMLINK libspdk_event_accel.so 00:04:42.261 CC module/event/subsystems/bdev/bdev.o 00:04:42.519 LIB libspdk_event_bdev.a 00:04:42.519 SO libspdk_event_bdev.so.6.0 00:04:42.519 SYMLINK libspdk_event_bdev.so 00:04:42.776 CC module/event/subsystems/nbd/nbd.o 00:04:42.776 CC module/event/subsystems/scsi/scsi.o 00:04:42.776 CC module/event/subsystems/ublk/ublk.o 00:04:42.776 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:42.776 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:42.776 LIB libspdk_event_nbd.a 00:04:43.034 LIB libspdk_event_ublk.a 00:04:43.034 SO libspdk_event_nbd.so.6.0 00:04:43.034 LIB libspdk_event_scsi.a 00:04:43.034 SO libspdk_event_ublk.so.3.0 00:04:43.034 SO libspdk_event_scsi.so.6.0 00:04:43.034 SYMLINK libspdk_event_nbd.so 00:04:43.034 SYMLINK libspdk_event_ublk.so 00:04:43.034 SYMLINK libspdk_event_scsi.so 00:04:43.034 LIB libspdk_event_nvmf.a 00:04:43.034 SO libspdk_event_nvmf.so.6.0 00:04:43.292 SYMLINK libspdk_event_nvmf.so 00:04:43.292 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:43.292 CC module/event/subsystems/iscsi/iscsi.o 00:04:43.549 LIB libspdk_event_vhost_scsi.a 
00:04:43.549 LIB libspdk_event_iscsi.a 00:04:43.549 SO libspdk_event_vhost_scsi.so.3.0 00:04:43.549 SO libspdk_event_iscsi.so.6.0 00:04:43.549 SYMLINK libspdk_event_vhost_scsi.so 00:04:43.549 SYMLINK libspdk_event_iscsi.so 00:04:43.807 SO libspdk.so.6.0 00:04:43.807 SYMLINK libspdk.so 00:04:44.064 CC app/trace_record/trace_record.o 00:04:44.064 CC app/spdk_nvme_perf/perf.o 00:04:44.064 CC app/spdk_lspci/spdk_lspci.o 00:04:44.064 CXX app/trace/trace.o 00:04:44.064 CC app/spdk_nvme_identify/identify.o 00:04:44.064 CC app/iscsi_tgt/iscsi_tgt.o 00:04:44.064 CC app/spdk_tgt/spdk_tgt.o 00:04:44.064 CC app/nvmf_tgt/nvmf_main.o 00:04:44.064 CC test/thread/poller_perf/poller_perf.o 00:04:44.322 CC examples/util/zipf/zipf.o 00:04:44.322 LINK spdk_lspci 00:04:44.322 LINK spdk_tgt 00:04:44.579 LINK poller_perf 00:04:44.579 LINK iscsi_tgt 00:04:44.579 LINK zipf 00:04:44.579 LINK spdk_trace_record 00:04:44.579 LINK nvmf_tgt 00:04:44.579 LINK spdk_trace 00:04:44.837 CC app/spdk_nvme_discover/discovery_aer.o 00:04:44.837 CC app/spdk_top/spdk_top.o 00:04:44.837 CC examples/ioat/perf/perf.o 00:04:44.837 CC app/spdk_dd/spdk_dd.o 00:04:45.094 CC test/dma/test_dma/test_dma.o 00:04:45.094 LINK spdk_nvme_discover 00:04:45.094 LINK spdk_nvme_perf 00:04:45.094 CC examples/vmd/lsvmd/lsvmd.o 00:04:45.094 CC examples/idxd/perf/perf.o 00:04:45.352 CC app/fio/nvme/fio_plugin.o 00:04:45.352 LINK ioat_perf 00:04:45.352 CC app/fio/bdev/fio_plugin.o 00:04:45.352 LINK spdk_nvme_identify 00:04:45.352 LINK lsvmd 00:04:45.610 CC app/vhost/vhost.o 00:04:45.610 LINK spdk_dd 00:04:45.610 CC examples/ioat/verify/verify.o 00:04:45.868 LINK idxd_perf 00:04:45.868 LINK vhost 00:04:45.868 CC examples/vmd/led/led.o 00:04:45.868 LINK test_dma 00:04:46.126 CC test/app/bdev_svc/bdev_svc.o 00:04:46.126 LINK spdk_top 00:04:46.126 LINK spdk_nvme 00:04:46.126 LINK led 00:04:46.126 LINK verify 00:04:46.126 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:46.126 LINK bdev_svc 00:04:46.383 LINK spdk_bdev 00:04:46.383 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:46.383 CC examples/thread/thread/thread_ex.o 00:04:46.383 CC examples/sock/hello_world/hello_sock.o 00:04:46.383 CC test/app/histogram_perf/histogram_perf.o 00:04:46.383 CC test/app/jsoncat/jsoncat.o 00:04:46.383 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:46.383 LINK interrupt_tgt 00:04:46.383 CC test/app/stub/stub.o 00:04:46.681 LINK jsoncat 00:04:46.681 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:46.681 LINK histogram_perf 00:04:46.681 LINK thread 00:04:46.681 LINK hello_sock 00:04:46.681 LINK stub 00:04:46.681 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:46.681 LINK nvme_fuzz 00:04:46.956 TEST_HEADER include/spdk/accel.h 00:04:46.956 TEST_HEADER include/spdk/accel_module.h 00:04:46.956 TEST_HEADER include/spdk/assert.h 00:04:46.956 TEST_HEADER include/spdk/barrier.h 00:04:46.956 TEST_HEADER include/spdk/base64.h 00:04:46.956 TEST_HEADER include/spdk/bdev.h 00:04:46.956 TEST_HEADER include/spdk/bdev_module.h 00:04:46.956 TEST_HEADER include/spdk/bdev_zone.h 00:04:46.956 TEST_HEADER include/spdk/bit_array.h 00:04:46.956 TEST_HEADER include/spdk/bit_pool.h 00:04:46.956 TEST_HEADER include/spdk/blob_bdev.h 00:04:46.956 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:46.956 TEST_HEADER include/spdk/blobfs.h 00:04:46.956 TEST_HEADER include/spdk/blob.h 00:04:46.956 TEST_HEADER include/spdk/conf.h 00:04:46.956 TEST_HEADER include/spdk/config.h 00:04:46.956 TEST_HEADER include/spdk/cpuset.h 00:04:46.956 TEST_HEADER include/spdk/crc16.h 00:04:46.956 TEST_HEADER include/spdk/crc32.h 
00:04:46.956 TEST_HEADER include/spdk/crc64.h 00:04:46.956 TEST_HEADER include/spdk/dif.h 00:04:46.956 TEST_HEADER include/spdk/dma.h 00:04:46.956 TEST_HEADER include/spdk/endian.h 00:04:46.956 TEST_HEADER include/spdk/env_dpdk.h 00:04:46.956 TEST_HEADER include/spdk/env.h 00:04:46.956 TEST_HEADER include/spdk/event.h 00:04:46.956 TEST_HEADER include/spdk/fd_group.h 00:04:46.956 TEST_HEADER include/spdk/fd.h 00:04:46.956 TEST_HEADER include/spdk/file.h 00:04:46.956 TEST_HEADER include/spdk/fsdev.h 00:04:46.956 TEST_HEADER include/spdk/fsdev_module.h 00:04:46.956 TEST_HEADER include/spdk/ftl.h 00:04:46.956 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:46.956 TEST_HEADER include/spdk/gpt_spec.h 00:04:46.956 TEST_HEADER include/spdk/hexlify.h 00:04:46.956 TEST_HEADER include/spdk/histogram_data.h 00:04:46.956 TEST_HEADER include/spdk/idxd.h 00:04:46.956 TEST_HEADER include/spdk/idxd_spec.h 00:04:46.956 TEST_HEADER include/spdk/init.h 00:04:46.956 TEST_HEADER include/spdk/ioat.h 00:04:46.956 TEST_HEADER include/spdk/ioat_spec.h 00:04:46.956 TEST_HEADER include/spdk/iscsi_spec.h 00:04:46.956 TEST_HEADER include/spdk/json.h 00:04:46.956 TEST_HEADER include/spdk/jsonrpc.h 00:04:46.956 TEST_HEADER include/spdk/keyring.h 00:04:46.956 TEST_HEADER include/spdk/keyring_module.h 00:04:46.956 TEST_HEADER include/spdk/likely.h 00:04:46.956 TEST_HEADER include/spdk/log.h 00:04:46.956 TEST_HEADER include/spdk/lvol.h 00:04:46.956 TEST_HEADER include/spdk/md5.h 00:04:46.956 TEST_HEADER include/spdk/memory.h 00:04:46.956 TEST_HEADER include/spdk/mmio.h 00:04:46.956 TEST_HEADER include/spdk/nbd.h 00:04:46.956 TEST_HEADER include/spdk/net.h 00:04:46.956 TEST_HEADER include/spdk/notify.h 00:04:46.956 TEST_HEADER include/spdk/nvme.h 00:04:46.956 TEST_HEADER include/spdk/nvme_intel.h 00:04:46.956 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:46.956 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:46.956 TEST_HEADER include/spdk/nvme_spec.h 00:04:46.956 TEST_HEADER include/spdk/nvme_zns.h 00:04:46.956 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:46.956 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:46.956 TEST_HEADER include/spdk/nvmf.h 00:04:46.956 TEST_HEADER include/spdk/nvmf_spec.h 00:04:46.956 TEST_HEADER include/spdk/nvmf_transport.h 00:04:46.956 TEST_HEADER include/spdk/opal.h 00:04:46.956 TEST_HEADER include/spdk/opal_spec.h 00:04:46.956 TEST_HEADER include/spdk/pci_ids.h 00:04:46.956 TEST_HEADER include/spdk/pipe.h 00:04:46.956 TEST_HEADER include/spdk/queue.h 00:04:46.956 TEST_HEADER include/spdk/reduce.h 00:04:46.956 TEST_HEADER include/spdk/rpc.h 00:04:46.956 TEST_HEADER include/spdk/scheduler.h 00:04:46.956 TEST_HEADER include/spdk/scsi.h 00:04:46.956 TEST_HEADER include/spdk/scsi_spec.h 00:04:46.956 TEST_HEADER include/spdk/sock.h 00:04:46.956 TEST_HEADER include/spdk/stdinc.h 00:04:46.956 TEST_HEADER include/spdk/string.h 00:04:46.956 TEST_HEADER include/spdk/thread.h 00:04:46.956 TEST_HEADER include/spdk/trace.h 00:04:46.956 TEST_HEADER include/spdk/trace_parser.h 00:04:46.956 TEST_HEADER include/spdk/tree.h 00:04:46.956 TEST_HEADER include/spdk/ublk.h 00:04:46.956 TEST_HEADER include/spdk/util.h 00:04:46.956 TEST_HEADER include/spdk/uuid.h 00:04:46.956 CC test/event/event_perf/event_perf.o 00:04:46.956 TEST_HEADER include/spdk/version.h 00:04:46.956 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:46.956 CC test/env/vtophys/vtophys.o 00:04:46.956 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:46.956 TEST_HEADER include/spdk/vhost.h 00:04:46.956 TEST_HEADER include/spdk/vmd.h 00:04:46.956 TEST_HEADER 
include/spdk/xor.h 00:04:46.956 CC test/env/mem_callbacks/mem_callbacks.o 00:04:46.956 TEST_HEADER include/spdk/zipf.h 00:04:46.956 CXX test/cpp_headers/accel.o 00:04:47.213 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:47.213 CC test/env/memory/memory_ut.o 00:04:47.213 CC test/event/reactor/reactor.o 00:04:47.213 LINK vhost_fuzz 00:04:47.213 LINK event_perf 00:04:47.213 CXX test/cpp_headers/accel_module.o 00:04:47.213 LINK vtophys 00:04:47.213 LINK env_dpdk_post_init 00:04:47.470 LINK reactor 00:04:47.470 CXX test/cpp_headers/assert.o 00:04:47.470 CXX test/cpp_headers/barrier.o 00:04:47.470 CXX test/cpp_headers/base64.o 00:04:47.470 CXX test/cpp_headers/bdev.o 00:04:47.728 CXX test/cpp_headers/bdev_module.o 00:04:47.728 CC test/event/reactor_perf/reactor_perf.o 00:04:47.728 CC test/env/pci/pci_ut.o 00:04:47.728 CXX test/cpp_headers/bdev_zone.o 00:04:47.984 LINK mem_callbacks 00:04:47.984 LINK reactor_perf 00:04:47.985 CXX test/cpp_headers/bit_array.o 00:04:47.985 CC examples/accel/perf/accel_perf.o 00:04:48.242 CC test/nvme/aer/aer.o 00:04:48.242 CC test/nvme/reset/reset.o 00:04:48.242 CC test/event/app_repeat/app_repeat.o 00:04:48.242 CC test/event/scheduler/scheduler.o 00:04:48.242 LINK pci_ut 00:04:48.242 CXX test/cpp_headers/bit_pool.o 00:04:48.242 LINK iscsi_fuzz 00:04:48.499 LINK app_repeat 00:04:48.499 CXX test/cpp_headers/blob_bdev.o 00:04:48.499 LINK scheduler 00:04:48.500 LINK aer 00:04:48.758 LINK accel_perf 00:04:48.758 LINK reset 00:04:48.758 CXX test/cpp_headers/blobfs_bdev.o 00:04:48.758 CXX test/cpp_headers/blobfs.o 00:04:48.758 CXX test/cpp_headers/blob.o 00:04:48.758 LINK memory_ut 00:04:49.016 CC test/nvme/sgl/sgl.o 00:04:49.016 CC test/nvme/e2edp/nvme_dp.o 00:04:49.016 CC test/rpc_client/rpc_client_test.o 00:04:49.016 CC test/nvme/overhead/overhead.o 00:04:49.016 CXX test/cpp_headers/conf.o 00:04:49.276 LINK rpc_client_test 00:04:49.276 LINK sgl 00:04:49.276 CC examples/blob/cli/blobcli.o 00:04:49.276 CC examples/blob/hello_world/hello_blob.o 00:04:49.276 CXX test/cpp_headers/config.o 00:04:49.276 LINK nvme_dp 00:04:49.276 CXX test/cpp_headers/cpuset.o 00:04:49.276 CC examples/nvme/hello_world/hello_world.o 00:04:49.276 LINK overhead 00:04:49.534 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:49.534 CXX test/cpp_headers/crc16.o 00:04:49.534 CXX test/cpp_headers/crc32.o 00:04:49.534 LINK hello_blob 00:04:49.793 CXX test/cpp_headers/crc64.o 00:04:49.793 CC test/nvme/err_injection/err_injection.o 00:04:49.793 LINK hello_world 00:04:49.793 CC test/nvme/startup/startup.o 00:04:49.793 CC test/nvme/reserve/reserve.o 00:04:49.793 LINK hello_fsdev 00:04:50.050 LINK blobcli 00:04:50.050 CXX test/cpp_headers/dif.o 00:04:50.050 CXX test/cpp_headers/dma.o 00:04:50.050 LINK startup 00:04:50.050 LINK reserve 00:04:50.050 LINK err_injection 00:04:50.050 CC examples/nvme/reconnect/reconnect.o 00:04:50.308 CC examples/bdev/hello_world/hello_bdev.o 00:04:50.308 CXX test/cpp_headers/endian.o 00:04:50.308 CC examples/bdev/bdevperf/bdevperf.o 00:04:50.308 CC test/nvme/simple_copy/simple_copy.o 00:04:50.308 CC test/nvme/connect_stress/connect_stress.o 00:04:50.308 CC test/nvme/boot_partition/boot_partition.o 00:04:50.308 CXX test/cpp_headers/env_dpdk.o 00:04:50.567 LINK hello_bdev 00:04:50.567 CC test/nvme/compliance/nvme_compliance.o 00:04:50.567 CC test/accel/dif/dif.o 00:04:50.567 LINK reconnect 00:04:50.567 LINK connect_stress 00:04:50.825 LINK boot_partition 00:04:50.825 CXX test/cpp_headers/env.o 00:04:50.825 LINK simple_copy 00:04:51.083 CXX test/cpp_headers/event.o 
00:04:51.083 CXX test/cpp_headers/fd_group.o 00:04:51.083 CXX test/cpp_headers/fd.o 00:04:51.083 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:51.083 CC test/nvme/fused_ordering/fused_ordering.o 00:04:51.340 LINK nvme_compliance 00:04:51.340 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:51.340 CXX test/cpp_headers/file.o 00:04:51.603 CXX test/cpp_headers/fsdev.o 00:04:51.603 CC test/nvme/fdp/fdp.o 00:04:51.603 CC examples/nvme/arbitration/arbitration.o 00:04:51.603 LINK fused_ordering 00:04:51.603 LINK doorbell_aers 00:04:51.603 LINK bdevperf 00:04:51.871 CXX test/cpp_headers/fsdev_module.o 00:04:51.871 CC examples/nvme/hotplug/hotplug.o 00:04:51.871 LINK dif 00:04:51.871 LINK nvme_manage 00:04:51.871 CC test/nvme/cuse/cuse.o 00:04:52.129 CXX test/cpp_headers/ftl.o 00:04:52.129 LINK fdp 00:04:52.129 LINK hotplug 00:04:52.129 LINK arbitration 00:04:52.129 CXX test/cpp_headers/fuse_dispatcher.o 00:04:52.129 CXX test/cpp_headers/gpt_spec.o 00:04:52.129 CC test/blobfs/mkfs/mkfs.o 00:04:52.388 CXX test/cpp_headers/hexlify.o 00:04:52.388 CXX test/cpp_headers/histogram_data.o 00:04:52.388 CC test/lvol/esnap/esnap.o 00:04:52.647 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:52.647 LINK mkfs 00:04:52.647 CC examples/nvme/abort/abort.o 00:04:52.647 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:52.647 CXX test/cpp_headers/idxd.o 00:04:52.647 LINK cmb_copy 00:04:52.905 CC test/bdev/bdevio/bdevio.o 00:04:52.905 CXX test/cpp_headers/idxd_spec.o 00:04:52.905 CXX test/cpp_headers/init.o 00:04:52.905 LINK pmr_persistence 00:04:52.905 CXX test/cpp_headers/ioat.o 00:04:52.905 CXX test/cpp_headers/ioat_spec.o 00:04:52.905 LINK abort 00:04:53.163 CXX test/cpp_headers/iscsi_spec.o 00:04:53.163 CXX test/cpp_headers/json.o 00:04:53.163 CXX test/cpp_headers/jsonrpc.o 00:04:53.163 CXX test/cpp_headers/keyring.o 00:04:53.163 CXX test/cpp_headers/keyring_module.o 00:04:53.163 CXX test/cpp_headers/likely.o 00:04:53.421 CXX test/cpp_headers/log.o 00:04:53.421 LINK bdevio 00:04:53.421 CXX test/cpp_headers/lvol.o 00:04:53.421 CXX test/cpp_headers/md5.o 00:04:53.421 CXX test/cpp_headers/memory.o 00:04:53.421 LINK cuse 00:04:53.421 CXX test/cpp_headers/mmio.o 00:04:53.679 CC examples/nvmf/nvmf/nvmf.o 00:04:53.679 CXX test/cpp_headers/nbd.o 00:04:53.679 CXX test/cpp_headers/net.o 00:04:53.679 CXX test/cpp_headers/notify.o 00:04:53.679 CXX test/cpp_headers/nvme.o 00:04:53.679 CXX test/cpp_headers/nvme_intel.o 00:04:53.679 CXX test/cpp_headers/nvme_ocssd.o 00:04:53.679 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:53.679 CXX test/cpp_headers/nvme_spec.o 00:04:53.679 CXX test/cpp_headers/nvme_zns.o 00:04:53.937 CXX test/cpp_headers/nvmf_cmd.o 00:04:53.937 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:53.937 LINK nvmf 00:04:53.937 CXX test/cpp_headers/nvmf.o 00:04:53.937 CXX test/cpp_headers/nvmf_spec.o 00:04:53.937 CXX test/cpp_headers/nvmf_transport.o 00:04:54.195 CXX test/cpp_headers/opal.o 00:04:54.195 CXX test/cpp_headers/opal_spec.o 00:04:54.195 CXX test/cpp_headers/pci_ids.o 00:04:54.195 CXX test/cpp_headers/pipe.o 00:04:54.195 CXX test/cpp_headers/queue.o 00:04:54.195 CXX test/cpp_headers/reduce.o 00:04:54.195 CXX test/cpp_headers/rpc.o 00:04:54.454 CXX test/cpp_headers/scheduler.o 00:04:54.454 CXX test/cpp_headers/scsi.o 00:04:54.454 CXX test/cpp_headers/scsi_spec.o 00:04:54.454 CXX test/cpp_headers/sock.o 00:04:54.454 CXX test/cpp_headers/stdinc.o 00:04:54.454 CXX test/cpp_headers/string.o 00:04:54.454 CXX test/cpp_headers/thread.o 00:04:54.454 CXX test/cpp_headers/trace.o 00:04:54.712 CXX 
test/cpp_headers/trace_parser.o 00:04:54.712 CXX test/cpp_headers/tree.o 00:04:54.712 CXX test/cpp_headers/ublk.o 00:04:54.712 CXX test/cpp_headers/util.o 00:04:54.712 CXX test/cpp_headers/uuid.o 00:04:54.712 CXX test/cpp_headers/version.o 00:04:54.712 CXX test/cpp_headers/vfio_user_pci.o 00:04:54.712 CXX test/cpp_headers/vfio_user_spec.o 00:04:54.712 CXX test/cpp_headers/vhost.o 00:04:54.712 CXX test/cpp_headers/vmd.o 00:04:54.970 CXX test/cpp_headers/xor.o 00:04:54.970 CXX test/cpp_headers/zipf.o 00:04:58.259 LINK esnap 00:04:58.517 00:04:58.517 real 1m54.160s 00:04:58.517 user 11m24.620s 00:04:58.517 sys 2m1.459s 00:04:58.517 15:17:57 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:58.517 15:17:57 make -- common/autotest_common.sh@10 -- $ set +x 00:04:58.517 ************************************ 00:04:58.517 END TEST make 00:04:58.517 ************************************ 00:04:58.777 15:17:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:58.777 15:17:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:58.777 15:17:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:58.777 15:17:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.777 15:17:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:58.777 15:17:57 -- pm/common@44 -- $ pid=5282 00:04:58.777 15:17:57 -- pm/common@50 -- $ kill -TERM 5282 00:04:58.777 15:17:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.777 15:17:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:58.777 15:17:57 -- pm/common@44 -- $ pid=5284 00:04:58.777 15:17:57 -- pm/common@50 -- $ kill -TERM 5284 00:04:58.777 15:17:57 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:58.777 15:17:57 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:58.777 15:17:57 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:58.777 15:17:57 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:58.777 15:17:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.777 15:17:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.777 15:17:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.777 15:17:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.777 15:17:57 -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.777 15:17:57 -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.777 15:17:57 -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.777 15:17:57 -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.777 15:17:57 -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.777 15:17:57 -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.777 15:17:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.777 15:17:57 -- scripts/common.sh@344 -- # case "$op" in 00:04:58.777 15:17:57 -- scripts/common.sh@345 -- # : 1 00:04:58.777 15:17:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.777 15:17:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.777 15:17:57 -- scripts/common.sh@365 -- # decimal 1 00:04:58.777 15:17:57 -- scripts/common.sh@353 -- # local d=1 00:04:58.777 15:17:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.777 15:17:57 -- scripts/common.sh@355 -- # echo 1 00:04:58.777 15:17:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.777 15:17:57 -- scripts/common.sh@366 -- # decimal 2 00:04:58.777 15:17:57 -- scripts/common.sh@353 -- # local d=2 00:04:58.777 15:17:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.777 15:17:57 -- scripts/common.sh@355 -- # echo 2 00:04:58.777 15:17:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.777 15:17:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.777 15:17:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.777 15:17:57 -- scripts/common.sh@368 -- # return 0 00:04:58.777 15:17:57 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.777 15:17:57 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:58.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.777 --rc genhtml_branch_coverage=1 00:04:58.777 --rc genhtml_function_coverage=1 00:04:58.777 --rc genhtml_legend=1 00:04:58.777 --rc geninfo_all_blocks=1 00:04:58.777 --rc geninfo_unexecuted_blocks=1 00:04:58.777 00:04:58.777 ' 00:04:58.777 15:17:57 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:58.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.777 --rc genhtml_branch_coverage=1 00:04:58.777 --rc genhtml_function_coverage=1 00:04:58.777 --rc genhtml_legend=1 00:04:58.777 --rc geninfo_all_blocks=1 00:04:58.777 --rc geninfo_unexecuted_blocks=1 00:04:58.777 00:04:58.777 ' 00:04:58.777 15:17:57 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:58.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.777 --rc genhtml_branch_coverage=1 00:04:58.777 --rc genhtml_function_coverage=1 00:04:58.777 --rc genhtml_legend=1 00:04:58.777 --rc geninfo_all_blocks=1 00:04:58.777 --rc geninfo_unexecuted_blocks=1 00:04:58.777 00:04:58.777 ' 00:04:58.777 15:17:57 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:58.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.777 --rc genhtml_branch_coverage=1 00:04:58.777 --rc genhtml_function_coverage=1 00:04:58.777 --rc genhtml_legend=1 00:04:58.777 --rc geninfo_all_blocks=1 00:04:58.777 --rc geninfo_unexecuted_blocks=1 00:04:58.777 00:04:58.777 ' 00:04:58.777 15:17:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.777 15:17:57 -- nvmf/common.sh@7 -- # uname -s 00:04:58.777 15:17:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.777 15:17:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.777 15:17:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.777 15:17:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.777 15:17:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.777 15:17:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.777 15:17:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.777 15:17:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.777 15:17:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.777 15:17:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.777 15:17:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:04:58.777 
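The cmp_versions/lt trace above (it reappears almost verbatim before the env and rpc suites further down) decides whether the installed lcov predates 2.x by splitting both version strings on ./-/: and comparing them component by component. A condensed, stand-alone sketch of that logic, simplified from the traced script rather than copied from it:

    # Sketch: component-wise version compare (returns 0 when $1 < $2).
    lt() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x; enable the branch-coverage --rc flags"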
15:17:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:04:58.777 15:17:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.777 15:17:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.777 15:17:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:58.777 15:17:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.777 15:17:57 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.777 15:17:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.777 15:17:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.777 15:17:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.777 15:17:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.777 15:17:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.777 15:17:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.777 15:17:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.777 15:17:57 -- paths/export.sh@5 -- # export PATH 00:04:58.777 15:17:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.778 15:17:57 -- nvmf/common.sh@51 -- # : 0 00:04:58.778 15:17:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.778 15:17:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.778 15:17:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.778 15:17:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.778 15:17:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.778 15:17:57 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.778 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.778 15:17:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.778 15:17:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.778 15:17:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.778 15:17:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:58.778 15:17:57 -- spdk/autotest.sh@32 -- # uname -s 00:04:58.778 15:17:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:58.778 15:17:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:58.778 15:17:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:58.778 15:17:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:58.778 15:17:57 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:58.778 15:17:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:59.038 15:17:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:59.038 15:17:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:59.038 15:17:57 -- spdk/autotest.sh@48 -- # udevadm_pid=56298 00:04:59.038 15:17:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:59.038 15:17:57 -- pm/common@17 -- # local monitor 00:04:59.038 15:17:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.038 15:17:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.038 15:17:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:59.038 15:17:57 -- pm/common@25 -- # sleep 1 00:04:59.038 15:17:57 -- pm/common@21 -- # date +%s 00:04:59.038 15:17:57 -- pm/common@21 -- # date +%s 00:04:59.038 15:17:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727795877 00:04:59.038 15:17:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727795877 00:04:59.038 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727795877_collect-cpu-load.pm.log 00:04:59.038 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727795877_collect-vmstat.pm.log 00:04:59.980 15:17:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:59.980 15:17:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:59.980 15:17:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.980 15:17:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.980 15:17:58 -- spdk/autotest.sh@59 -- # create_test_list 00:04:59.980 15:17:58 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:59.980 15:17:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.980 15:17:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:59.980 15:17:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:59.980 15:17:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:59.980 15:17:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:59.980 15:17:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:59.980 15:17:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:59.980 15:17:59 -- common/autotest_common.sh@1455 -- # uname 00:04:59.980 15:17:59 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:59.980 15:17:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:59.980 15:17:59 -- common/autotest_common.sh@1475 -- # uname 00:04:59.980 15:17:59 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:59.980 15:17:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:59.980 15:17:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:59.980 lcov: LCOV version 1.15 00:04:59.980 15:17:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:18.085 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:18.085 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:36.163 15:18:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:36.163 15:18:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.163 15:18:35 -- common/autotest_common.sh@10 -- # set +x 00:05:36.163 15:18:35 -- spdk/autotest.sh@78 -- # rm -f 00:05:36.163 15:18:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.680 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:36.680 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:36.680 15:18:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:36.680 15:18:35 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:36.680 15:18:35 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:36.680 15:18:35 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:36.680 15:18:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:36.680 15:18:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:36.680 15:18:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:36.680 15:18:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:36.680 15:18:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:36.680 15:18:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:36.680 15:18:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:36.680 15:18:35 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:36.680 15:18:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:36.680 15:18:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:36.680 15:18:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:36.680 15:18:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:36.680 15:18:35 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:36.680 15:18:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:36.680 15:18:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:36.680 15:18:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:36.680 15:18:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:36.680 15:18:35 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:36.680 15:18:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:36.680 15:18:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:36.680 15:18:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:36.680 15:18:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.680 15:18:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.680 15:18:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:36.680 15:18:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:36.680 15:18:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:36.680 No valid GPT data, bailing 
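spdk-gpt.py has just bailed with 'No valid GPT data' for /dev/nvme0n1; the trace that follows double-checks with blkid for any other partition-table signature and, finding none, zeroes the first MiB of each namespace so later tests start from a blank device. A minimal sketch of that guard-then-wipe pattern (the spdk-gpt.py pass is omitted here for brevity):

    # Sketch: only wipe a namespace that carries no partition table.
    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1  # matches the 1 MiB wipes below
    fi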
00:05:36.680 15:18:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:36.680 15:18:35 -- scripts/common.sh@394 -- # pt= 00:05:36.680 15:18:35 -- scripts/common.sh@395 -- # return 1 00:05:36.680 15:18:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:36.680 1+0 records in 00:05:36.680 1+0 records out 00:05:36.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00312079 s, 336 MB/s 00:05:36.680 15:18:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.680 15:18:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.680 15:18:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:36.680 15:18:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:36.680 15:18:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:36.680 No valid GPT data, bailing 00:05:36.680 15:18:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:36.680 15:18:35 -- scripts/common.sh@394 -- # pt= 00:05:36.680 15:18:35 -- scripts/common.sh@395 -- # return 1 00:05:36.680 15:18:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:36.680 1+0 records in 00:05:36.680 1+0 records out 00:05:36.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00337067 s, 311 MB/s 00:05:36.680 15:18:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.680 15:18:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.680 15:18:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:36.680 15:18:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:36.680 15:18:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:36.939 No valid GPT data, bailing 00:05:36.939 15:18:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:36.939 15:18:35 -- scripts/common.sh@394 -- # pt= 00:05:36.939 15:18:35 -- scripts/common.sh@395 -- # return 1 00:05:36.939 15:18:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:36.939 1+0 records in 00:05:36.939 1+0 records out 00:05:36.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00348369 s, 301 MB/s 00:05:36.939 15:18:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:36.939 15:18:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:36.939 15:18:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:36.939 15:18:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:36.939 15:18:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:36.939 No valid GPT data, bailing 00:05:36.939 15:18:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:36.939 15:18:35 -- scripts/common.sh@394 -- # pt= 00:05:36.939 15:18:35 -- scripts/common.sh@395 -- # return 1 00:05:36.939 15:18:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:36.939 1+0 records in 00:05:36.939 1+0 records out 00:05:36.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00349105 s, 300 MB/s 00:05:36.939 15:18:35 -- spdk/autotest.sh@105 -- # sync 00:05:36.939 15:18:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:36.939 15:18:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:36.939 15:18:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:38.838 15:18:37 -- spdk/autotest.sh@111 -- # uname -s 00:05:38.838 15:18:37 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:05:38.838 15:18:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:38.838 15:18:37 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:39.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.405 Hugepages 00:05:39.405 node hugesize free / total 00:05:39.405 node0 1048576kB 0 / 0 00:05:39.405 node0 2048kB 0 / 0 00:05:39.405 00:05:39.405 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:39.405 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:39.405 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:39.405 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:39.405 15:18:38 -- spdk/autotest.sh@117 -- # uname -s 00:05:39.405 15:18:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:39.405 15:18:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:39.405 15:18:38 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:39.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.236 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.236 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.236 15:18:39 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:41.171 15:18:40 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:41.171 15:18:40 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:41.171 15:18:40 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:41.171 15:18:40 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:41.171 15:18:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:41.171 15:18:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:41.171 15:18:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:41.171 15:18:40 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:41.171 15:18:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:41.429 15:18:40 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:41.429 15:18:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:41.430 15:18:40 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.687 Waiting for block devices as requested 00:05:41.687 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.687 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.945 15:18:40 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:41.945 15:18:40 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:41.945 15:18:40 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:41.945 15:18:40 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:41.945 15:18:40 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:41.945 15:18:40 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:41.945 15:18:40 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:41.945 15:18:40 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:05:41.945 15:18:40 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:41.945 15:18:40 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:41.945 15:18:40 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:41.945 15:18:40 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:41.945 15:18:40 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:41.945 15:18:40 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:41.945 15:18:40 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:41.945 15:18:40 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:41.945 15:18:40 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:41.945 15:18:40 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:41.945 15:18:40 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:41.945 15:18:40 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:41.945 15:18:40 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:41.945 15:18:40 -- common/autotest_common.sh@1541 -- # continue 00:05:41.945 15:18:40 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:41.945 15:18:40 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:41.945 15:18:40 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:41.945 15:18:40 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:41.946 15:18:40 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:41.946 15:18:40 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:41.946 15:18:40 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:41.946 15:18:40 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:41.946 15:18:40 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:41.946 15:18:40 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:41.946 15:18:40 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:41.946 15:18:40 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:41.946 15:18:40 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:41.946 15:18:40 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:41.946 15:18:40 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:41.946 15:18:40 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:41.946 15:18:40 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:41.946 15:18:40 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:41.946 15:18:40 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:41.946 15:18:40 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:41.946 15:18:40 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:41.946 15:18:40 -- common/autotest_common.sh@1541 -- # continue 00:05:41.946 15:18:40 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:41.946 15:18:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:41.946 15:18:40 -- common/autotest_common.sh@10 -- # set +x 00:05:41.946 15:18:40 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:41.946 15:18:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.946 15:18:40 -- common/autotest_common.sh@10 -- # set +x 00:05:41.946 15:18:40 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.511 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.769 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.769 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.769 15:18:41 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:42.769 15:18:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.769 15:18:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.769 15:18:41 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:42.769 15:18:41 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:42.769 15:18:41 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:42.769 15:18:41 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:42.769 15:18:41 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:42.769 15:18:41 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:42.769 15:18:41 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:42.769 15:18:41 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:42.769 15:18:41 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:42.769 15:18:41 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:42.769 15:18:41 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:42.769 15:18:41 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:42.769 15:18:41 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:42.769 15:18:41 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:42.769 15:18:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:42.769 15:18:41 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:42.769 15:18:41 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:42.769 15:18:41 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:42.769 15:18:41 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.769 15:18:41 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:42.769 15:18:41 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:42.769 15:18:41 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:42.769 15:18:41 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.769 15:18:41 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:42.769 15:18:41 -- common/autotest_common.sh@1570 -- # return 0 00:05:42.769 15:18:41 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:42.769 15:18:41 -- common/autotest_common.sh@1578 -- # return 0 00:05:42.769 15:18:41 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:42.769 15:18:41 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:42.769 15:18:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:42.769 15:18:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:42.769 15:18:41 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:42.769 15:18:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.769 15:18:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.769 15:18:41 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:42.770 15:18:41 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:42.770 15:18:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.770 15:18:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.770 15:18:41 -- common/autotest_common.sh@10 
-- # set +x 00:05:42.770 ************************************ 00:05:42.770 START TEST env 00:05:42.770 ************************************ 00:05:42.770 15:18:41 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.028 * Looking for test storage... 00:05:43.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:43.028 15:18:41 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:43.028 15:18:41 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:43.028 15:18:41 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:43.028 15:18:42 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:43.028 15:18:42 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.028 15:18:42 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.028 15:18:42 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.028 15:18:42 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.028 15:18:42 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.028 15:18:42 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.028 15:18:42 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.028 15:18:42 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.028 15:18:42 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.028 15:18:42 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.028 15:18:42 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.028 15:18:42 env -- scripts/common.sh@344 -- # case "$op" in 00:05:43.028 15:18:42 env -- scripts/common.sh@345 -- # : 1 00:05:43.028 15:18:42 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.028 15:18:42 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.028 15:18:42 env -- scripts/common.sh@365 -- # decimal 1 00:05:43.028 15:18:42 env -- scripts/common.sh@353 -- # local d=1 00:05:43.028 15:18:42 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.028 15:18:42 env -- scripts/common.sh@355 -- # echo 1 00:05:43.028 15:18:42 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.028 15:18:42 env -- scripts/common.sh@366 -- # decimal 2 00:05:43.028 15:18:42 env -- scripts/common.sh@353 -- # local d=2 00:05:43.028 15:18:42 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.028 15:18:42 env -- scripts/common.sh@355 -- # echo 2 00:05:43.028 15:18:42 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.028 15:18:42 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.028 15:18:42 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.028 15:18:42 env -- scripts/common.sh@368 -- # return 0 00:05:43.028 15:18:42 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.028 15:18:42 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:43.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.028 --rc genhtml_branch_coverage=1 00:05:43.028 --rc genhtml_function_coverage=1 00:05:43.028 --rc genhtml_legend=1 00:05:43.028 --rc geninfo_all_blocks=1 00:05:43.028 --rc geninfo_unexecuted_blocks=1 00:05:43.028 00:05:43.028 ' 00:05:43.028 15:18:42 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:43.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.028 --rc genhtml_branch_coverage=1 00:05:43.028 --rc genhtml_function_coverage=1 00:05:43.028 --rc genhtml_legend=1 00:05:43.028 --rc geninfo_all_blocks=1 00:05:43.028 --rc geninfo_unexecuted_blocks=1 
00:05:43.028 00:05:43.028 ' 00:05:43.028 15:18:42 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:43.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.028 --rc genhtml_branch_coverage=1 00:05:43.028 --rc genhtml_function_coverage=1 00:05:43.028 --rc genhtml_legend=1 00:05:43.028 --rc geninfo_all_blocks=1 00:05:43.028 --rc geninfo_unexecuted_blocks=1 00:05:43.028 00:05:43.028 ' 00:05:43.028 15:18:42 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:43.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.028 --rc genhtml_branch_coverage=1 00:05:43.028 --rc genhtml_function_coverage=1 00:05:43.028 --rc genhtml_legend=1 00:05:43.028 --rc geninfo_all_blocks=1 00:05:43.028 --rc geninfo_unexecuted_blocks=1 00:05:43.028 00:05:43.028 ' 00:05:43.029 15:18:42 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.029 15:18:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.029 15:18:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.029 15:18:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.029 ************************************ 00:05:43.029 START TEST env_memory 00:05:43.029 ************************************ 00:05:43.029 15:18:42 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.029 00:05:43.029 00:05:43.029 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.029 http://cunit.sourceforge.net/ 00:05:43.029 00:05:43.029 00:05:43.029 Suite: memory 00:05:43.029 Test: alloc and free memory map ...[2024-10-01 15:18:42.163089] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:43.029 passed 00:05:43.029 Test: mem map translation ...[2024-10-01 15:18:42.195019] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:43.029 [2024-10-01 15:18:42.195073] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:43.029 [2024-10-01 15:18:42.195120] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:43.029 [2024-10-01 15:18:42.195130] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:43.286 passed 00:05:43.286 Test: mem map registration ...[2024-10-01 15:18:42.249314] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:43.286 [2024-10-01 15:18:42.249385] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:43.286 passed 00:05:43.286 Test: mem map adjacent registrations ...passed 00:05:43.286 00:05:43.286 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.286 suites 1 1 n/a 0 0 00:05:43.286 tests 4 4 4 0 0 00:05:43.286 asserts 152 152 152 0 n/a 00:05:43.286 00:05:43.286 Elapsed time = 0.232 seconds 00:05:43.286 00:05:43.286 real 0m0.261s 00:05:43.286 user 0m0.227s 00:05:43.286 sys 0m0.019s 00:05:43.286 15:18:42 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 
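The *ERROR* lines from memory.c above are the test passing, not failing: the mem map translation test deliberately feeds in a length of 1234 bytes and a vaddr of 1234 (neither a multiple of the map's 2 MiB granularity) plus an address above the usermode range, and asserts that each is rejected. Illustrative arithmetic only, not taken from the test source:

    # Sketch: why those parameters are invalid for a 2 MiB-granular mem map.
    page=$(( 2 * 1024 * 1024 ))
    echo $(( 2097152 % page ))              # 0  -> vaddr=2097152 is aligned
    echo $(( 1234 % page ))                 # !0 -> a len/vaddr of 1234 is rejected
    echo $(( 281474976710656 == 1 << 48 ))  # 1  -> beyond the usermode VA limit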
00:05:43.286 15:18:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:43.286 ************************************ 00:05:43.286 END TEST env_memory 00:05:43.286 ************************************ 00:05:43.286 15:18:42 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.286 15:18:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.286 15:18:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.286 15:18:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.286 ************************************ 00:05:43.286 START TEST env_vtophys 00:05:43.286 ************************************ 00:05:43.286 15:18:42 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.286 EAL: lib.eal log level changed from notice to debug 00:05:43.286 EAL: Detected lcore 0 as core 0 on socket 0 00:05:43.286 EAL: Detected lcore 1 as core 0 on socket 0 00:05:43.286 EAL: Detected lcore 2 as core 0 on socket 0 00:05:43.286 EAL: Detected lcore 3 as core 0 on socket 0 00:05:43.286 EAL: Detected lcore 4 as core 0 on socket 0 00:05:43.286 EAL: Detected lcore 5 as core 0 on socket 0 00:05:43.286 EAL: Detected lcore 6 as core 0 on socket 0 00:05:43.286 EAL: Detected lcore 7 as core 0 on socket 0 00:05:43.286 EAL: Detected lcore 8 as core 0 on socket 0 00:05:43.286 EAL: Detected lcore 9 as core 0 on socket 0 00:05:43.286 EAL: Maximum logical cores by configuration: 128 00:05:43.286 EAL: Detected CPU lcores: 10 00:05:43.286 EAL: Detected NUMA nodes: 1 00:05:43.286 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:43.286 EAL: Detected shared linkage of DPDK 00:05:43.286 EAL: No shared files mode enabled, IPC will be disabled 00:05:43.286 EAL: Selected IOVA mode 'PA' 00:05:43.286 EAL: Probing VFIO support... 00:05:43.286 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.286 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:43.286 EAL: Ask a virtual area of 0x2e000 bytes 00:05:43.286 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:43.286 EAL: Setting up physically contiguous memory... 
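EAL probed for VFIO above by looking for the kernel modules under /sys/module, found neither vfio nor vfio_pci in this VM, and therefore fell back to uio-style device access with IOVA mode 'PA' (consistent with the nvme -> uio_pci_generic rebinds earlier in the log). A rough host-side equivalent of that probe, as an approximation of the decision rather than EAL's actual code path:

    # Sketch: approximate EAL's VFIO availability check.
    if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
        echo "VFIO loaded: vfio-pci binding and IOVA=VA are possible"
    else
        echo "VFIO missing: fall back to uio_pci_generic / IOVA=PA (this run)"
    fi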
00:05:43.286 EAL: Setting maximum number of open files to 524288 00:05:43.286 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:43.286 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:43.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.286 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:43.286 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.286 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:43.286 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:43.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.286 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:43.286 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.286 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:43.286 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:43.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.286 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:43.286 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.286 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:43.286 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:43.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.286 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:43.286 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.286 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:43.286 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:43.286 EAL: Hugepages will be freed exactly as allocated. 00:05:43.286 EAL: No shared files mode enabled, IPC is disabled 00:05:43.286 EAL: No shared files mode enabled, IPC is disabled 00:05:43.544 EAL: TSC frequency is ~2200000 KHz 00:05:43.544 EAL: Main lcore 0 is ready (tid=7fd879279a00;cpuset=[0]) 00:05:43.544 EAL: Trying to obtain current memory policy. 00:05:43.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.544 EAL: Restoring previous memory policy: 0 00:05:43.544 EAL: request: mp_malloc_sync 00:05:43.544 EAL: No shared files mode enabled, IPC is disabled 00:05:43.544 EAL: Heap on socket 0 was expanded by 2MB 00:05:43.544 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.544 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:43.544 EAL: Mem event callback 'spdk:(nil)' registered 00:05:43.544 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:43.544 00:05:43.544 00:05:43.544 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.544 http://cunit.sourceforge.net/ 00:05:43.544 00:05:43.544 00:05:43.544 Suite: components_suite 00:05:43.544 Test: vtophys_malloc_test ...passed 00:05:43.544 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
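The long run of expand/shrink pairs that follows is vtophys_spdk_malloc_test allocating and freeing successively larger buffers; each spdk malloc fires the 'spdk:(nil)' mem event callback, EAL grows the heap by the requested amount, and the matching free shrinks it again. The sizes stepped through below follow a 2^n + 2 MB ladder, an observation from this log rather than the test source:

    # Sketch: reproduce the heap-growth ladder seen below.
    for n in $(seq 1 10); do
        printf '%dMB ' $(( (1 << n) + 2 ))
    done; echo  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB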
00:05:43.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.544 EAL: Restoring previous memory policy: 4 00:05:43.544 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.544 EAL: request: mp_malloc_sync 00:05:43.544 EAL: No shared files mode enabled, IPC is disabled 00:05:43.544 EAL: Heap on socket 0 was expanded by 4MB 00:05:43.544 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.544 EAL: request: mp_malloc_sync 00:05:43.544 EAL: No shared files mode enabled, IPC is disabled 00:05:43.544 EAL: Heap on socket 0 was shrunk by 4MB 00:05:43.544 EAL: Trying to obtain current memory policy. 00:05:43.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.545 EAL: Restoring previous memory policy: 4 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was expanded by 6MB 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was shrunk by 6MB 00:05:43.545 EAL: Trying to obtain current memory policy. 00:05:43.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.545 EAL: Restoring previous memory policy: 4 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was expanded by 10MB 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was shrunk by 10MB 00:05:43.545 EAL: Trying to obtain current memory policy. 00:05:43.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.545 EAL: Restoring previous memory policy: 4 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was expanded by 18MB 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was shrunk by 18MB 00:05:43.545 EAL: Trying to obtain current memory policy. 00:05:43.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.545 EAL: Restoring previous memory policy: 4 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was expanded by 34MB 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was shrunk by 34MB 00:05:43.545 EAL: Trying to obtain current memory policy. 
00:05:43.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.545 EAL: Restoring previous memory policy: 4 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was expanded by 66MB 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was shrunk by 66MB 00:05:43.545 EAL: Trying to obtain current memory policy. 00:05:43.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.545 EAL: Restoring previous memory policy: 4 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was expanded by 130MB 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was shrunk by 130MB 00:05:43.545 EAL: Trying to obtain current memory policy. 00:05:43.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.545 EAL: Restoring previous memory policy: 4 00:05:43.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.545 EAL: request: mp_malloc_sync 00:05:43.545 EAL: No shared files mode enabled, IPC is disabled 00:05:43.545 EAL: Heap on socket 0 was expanded by 258MB 00:05:43.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.803 EAL: request: mp_malloc_sync 00:05:43.803 EAL: No shared files mode enabled, IPC is disabled 00:05:43.803 EAL: Heap on socket 0 was shrunk by 258MB 00:05:43.803 EAL: Trying to obtain current memory policy. 00:05:43.803 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.803 EAL: Restoring previous memory policy: 4 00:05:43.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.803 EAL: request: mp_malloc_sync 00:05:43.803 EAL: No shared files mode enabled, IPC is disabled 00:05:43.803 EAL: Heap on socket 0 was expanded by 514MB 00:05:43.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.803 EAL: request: mp_malloc_sync 00:05:43.804 EAL: No shared files mode enabled, IPC is disabled 00:05:43.804 EAL: Heap on socket 0 was shrunk by 514MB 00:05:43.804 EAL: Trying to obtain current memory policy. 
00:05:43.804 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.063 EAL: Restoring previous memory policy: 4 00:05:44.063 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.063 EAL: request: mp_malloc_sync 00:05:44.063 EAL: No shared files mode enabled, IPC is disabled 00:05:44.063 EAL: Heap on socket 0 was expanded by 1026MB 00:05:44.063 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.321 EAL: request: mp_malloc_sync 00:05:44.321 EAL: No shared files mode enabled, IPC is disabled 00:05:44.321 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:44.321 passed 00:05:44.321 00:05:44.321 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.321 suites 1 1 n/a 0 0 00:05:44.321 tests 2 2 2 0 0 00:05:44.321 asserts 5540 5540 5540 0 n/a 00:05:44.321 00:05:44.321 Elapsed time = 0.676 seconds 00:05:44.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.321 EAL: request: mp_malloc_sync 00:05:44.321 EAL: No shared files mode enabled, IPC is disabled 00:05:44.321 EAL: Heap on socket 0 was shrunk by 2MB 00:05:44.321 EAL: No shared files mode enabled, IPC is disabled 00:05:44.321 EAL: No shared files mode enabled, IPC is disabled 00:05:44.321 EAL: No shared files mode enabled, IPC is disabled 00:05:44.321 00:05:44.321 real 0m0.893s 00:05:44.321 user 0m0.454s 00:05:44.321 sys 0m0.305s 00:05:44.321 15:18:43 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.321 15:18:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:44.321 ************************************ 00:05:44.321 END TEST env_vtophys 00:05:44.321 ************************************ 00:05:44.321 15:18:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:44.321 15:18:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.321 15:18:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.321 15:18:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.321 ************************************ 00:05:44.321 START TEST env_pci 00:05:44.321 ************************************ 00:05:44.321 15:18:43 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:44.321 00:05:44.321 00:05:44.321 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.321 http://cunit.sourceforge.net/ 00:05:44.321 00:05:44.321 00:05:44.321 Suite: pci 00:05:44.321 Test: pci_hook ...[2024-10-01 15:18:43.345212] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58580 has claimed it 00:05:44.321 passed 00:05:44.321 00:05:44.321 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.321 suites 1 1 n/a 0 0 00:05:44.321 tests 1 1 1 0 0 00:05:44.321 asserts 25 25 25 0 n/a 00:05:44.321 00:05:44.321 Elapsed time = 0.002 secondsEAL: Cannot find device (10000:00:01.0) 00:05:44.321 EAL: Failed to attach device on primary process 00:05:44.321 00:05:44.321 00:05:44.321 real 0m0.020s 00:05:44.321 user 0m0.008s 00:05:44.321 sys 0m0.012s 00:05:44.321 15:18:43 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.321 15:18:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:44.321 ************************************ 00:05:44.321 END TEST env_pci 00:05:44.321 ************************************ 00:05:44.321 15:18:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:44.321 15:18:43 env -- env/env.sh@15 -- # uname 00:05:44.321 15:18:43 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:44.321 15:18:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:44.321 15:18:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:44.321 15:18:43 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:44.321 15:18:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.321 15:18:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.321 ************************************ 00:05:44.321 START TEST env_dpdk_post_init 00:05:44.321 ************************************ 00:05:44.321 15:18:43 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:44.321 EAL: Detected CPU lcores: 10 00:05:44.321 EAL: Detected NUMA nodes: 1 00:05:44.321 EAL: Detected shared linkage of DPDK 00:05:44.321 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.321 EAL: Selected IOVA mode 'PA' 00:05:44.581 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.581 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:44.581 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:44.581 Starting DPDK initialization... 00:05:44.581 Starting SPDK post initialization... 00:05:44.581 SPDK NVMe probe 00:05:44.581 Attaching to 0000:00:10.0 00:05:44.581 Attaching to 0000:00:11.0 00:05:44.581 Attached to 0000:00:10.0 00:05:44.581 Attached to 0000:00:11.0 00:05:44.581 Cleaning up... 00:05:44.581 00:05:44.581 real 0m0.171s 00:05:44.581 user 0m0.042s 00:05:44.581 sys 0m0.029s 00:05:44.581 15:18:43 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.581 15:18:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.581 ************************************ 00:05:44.581 END TEST env_dpdk_post_init 00:05:44.581 ************************************ 00:05:44.581 15:18:43 env -- env/env.sh@26 -- # uname 00:05:44.581 15:18:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:44.581 15:18:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.581 15:18:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.581 15:18:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.581 15:18:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.581 ************************************ 00:05:44.581 START TEST env_mem_callbacks 00:05:44.581 ************************************ 00:05:44.581 15:18:43 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.581 EAL: Detected CPU lcores: 10 00:05:44.581 EAL: Detected NUMA nodes: 1 00:05:44.581 EAL: Detected shared linkage of DPDK 00:05:44.581 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.581 EAL: Selected IOVA mode 'PA' 00:05:44.581 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.581 00:05:44.581 00:05:44.581 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.581 http://cunit.sourceforge.net/ 00:05:44.581 00:05:44.840 00:05:44.840 Suite: memory 00:05:44.840 Test: test ... 
00:05:44.840 register 0x200000200000 2097152 00:05:44.840 malloc 3145728 00:05:44.840 register 0x200000400000 4194304 00:05:44.840 buf 0x200000500000 len 3145728 PASSED 00:05:44.840 malloc 64 00:05:44.840 buf 0x2000004fff40 len 64 PASSED 00:05:44.840 malloc 4194304 00:05:44.840 register 0x200000800000 6291456 00:05:44.840 buf 0x200000a00000 len 4194304 PASSED 00:05:44.840 free 0x200000500000 3145728 00:05:44.840 free 0x2000004fff40 64 00:05:44.840 unregister 0x200000400000 4194304 PASSED 00:05:44.840 free 0x200000a00000 4194304 00:05:44.840 unregister 0x200000800000 6291456 PASSED 00:05:44.840 malloc 8388608 00:05:44.840 register 0x200000400000 10485760 00:05:44.840 buf 0x200000600000 len 8388608 PASSED 00:05:44.840 free 0x200000600000 8388608 00:05:44.840 unregister 0x200000400000 10485760 PASSED 00:05:44.840 passed 00:05:44.840 00:05:44.840 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.840 suites 1 1 n/a 0 0 00:05:44.840 tests 1 1 1 0 0 00:05:44.840 asserts 15 15 15 0 n/a 00:05:44.840 00:05:44.840 Elapsed time = 0.007 seconds 00:05:44.840 00:05:44.840 real 0m0.144s 00:05:44.840 user 0m0.019s 00:05:44.840 sys 0m0.023s 00:05:44.840 15:18:43 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.840 15:18:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:44.840 ************************************ 00:05:44.840 END TEST env_mem_callbacks 00:05:44.840 ************************************ 00:05:44.840 00:05:44.840 real 0m1.878s 00:05:44.840 user 0m0.935s 00:05:44.840 sys 0m0.587s 00:05:44.840 15:18:43 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.840 15:18:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.840 ************************************ 00:05:44.840 END TEST env 00:05:44.840 ************************************ 00:05:44.840 15:18:43 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.840 15:18:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.840 15:18:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.840 15:18:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.840 ************************************ 00:05:44.840 START TEST rpc 00:05:44.840 ************************************ 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.840 * Looking for test storage... 
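Every START TEST/END TEST banner in this log, including the env/rpc pair just above, comes from the run_test helper in autotest_common.sh, which takes a test name plus a command, times the command, and brackets its output with banners. A minimal sketch of that pattern (simplified: the real helper also manages xtrace state and performs the argument checks visible as the '[' 2 -le 1 ']' trace lines):

    # Sketch: banner-and-timing wrapper in the spirit of run_test.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }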
00:05:44.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:44.840 15:18:43 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.840 15:18:43 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.840 15:18:43 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.840 15:18:43 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.840 15:18:43 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.840 15:18:43 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.840 15:18:43 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.840 15:18:43 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.840 15:18:43 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.840 15:18:43 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.840 15:18:43 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.840 15:18:43 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.840 15:18:43 rpc -- scripts/common.sh@345 -- # : 1 00:05:44.840 15:18:43 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.840 15:18:43 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.840 15:18:43 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.840 15:18:43 rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.840 15:18:43 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.840 15:18:43 rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.840 15:18:43 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.840 15:18:43 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.840 15:18:43 rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.840 15:18:43 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.840 15:18:43 rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.840 15:18:43 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.840 15:18:43 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.840 15:18:43 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.840 15:18:43 rpc -- scripts/common.sh@368 -- # return 0 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:44.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.840 --rc genhtml_branch_coverage=1 00:05:44.840 --rc genhtml_function_coverage=1 00:05:44.840 --rc genhtml_legend=1 00:05:44.840 --rc geninfo_all_blocks=1 00:05:44.840 --rc geninfo_unexecuted_blocks=1 00:05:44.840 00:05:44.840 ' 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:44.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.840 --rc genhtml_branch_coverage=1 00:05:44.840 --rc genhtml_function_coverage=1 00:05:44.840 --rc genhtml_legend=1 00:05:44.840 --rc geninfo_all_blocks=1 00:05:44.840 --rc geninfo_unexecuted_blocks=1 00:05:44.840 00:05:44.840 ' 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:44.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.840 --rc genhtml_branch_coverage=1 00:05:44.840 --rc genhtml_function_coverage=1 00:05:44.840 --rc 
genhtml_legend=1 00:05:44.840 --rc geninfo_all_blocks=1 00:05:44.840 --rc geninfo_unexecuted_blocks=1 00:05:44.840 00:05:44.840 ' 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:44.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.840 --rc genhtml_branch_coverage=1 00:05:44.840 --rc genhtml_function_coverage=1 00:05:44.840 --rc genhtml_legend=1 00:05:44.840 --rc geninfo_all_blocks=1 00:05:44.840 --rc geninfo_unexecuted_blocks=1 00:05:44.840 00:05:44.840 ' 00:05:44.840 15:18:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58698 00:05:44.840 15:18:43 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:44.840 15:18:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.840 15:18:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58698 00:05:44.840 15:18:43 rpc -- common/autotest_common.sh@831 -- # '[' -z 58698 ']' 00:05:44.841 15:18:44 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.841 15:18:44 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.841 15:18:44 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.841 15:18:44 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.841 15:18:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.099 [2024-10-01 15:18:44.079739] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:05:45.099 [2024-10-01 15:18:44.079831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58698 ] 00:05:45.099 [2024-10-01 15:18:44.217975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.356 [2024-10-01 15:18:44.280326] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:45.356 [2024-10-01 15:18:44.280398] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58698' to capture a snapshot of events at runtime. 00:05:45.356 [2024-10-01 15:18:44.280412] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:45.356 [2024-10-01 15:18:44.280439] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:45.356 [2024-10-01 15:18:44.280450] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58698 for offline analysis/debug. 
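The app_setup_trace notices above are directly actionable: because spdk_tgt was launched with -e bdev, the bdev tracepoint group is active and its ring buffer lives in shared memory for the lifetime of the process. Capturing it, following the log's own hints (the pid suffix differs per run):

    build/bin/spdk_trace -s spdk_tgt -p 58698      # live snapshot of the enabled tracepoints
    cp /dev/shm/spdk_tgt_trace.pid58698 /tmp/      # or keep the shm file for offline analysis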
00:05:45.356 [2024-10-01 15:18:44.280491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.286 15:18:45 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.286 15:18:45 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.286 15:18:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.286 15:18:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.286 15:18:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:46.286 15:18:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:46.286 15:18:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.286 15:18:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.286 15:18:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.286 ************************************ 00:05:46.286 START TEST rpc_integrity 00:05:46.286 ************************************ 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.286 { 00:05:46.286 "aliases": [ 00:05:46.286 "44f6f294-3710-4d10-9cb1-0d7f6b0fe585" 00:05:46.286 ], 00:05:46.286 "assigned_rate_limits": { 00:05:46.286 "r_mbytes_per_sec": 0, 00:05:46.286 "rw_ios_per_sec": 0, 00:05:46.286 "rw_mbytes_per_sec": 0, 00:05:46.286 "w_mbytes_per_sec": 0 00:05:46.286 }, 00:05:46.286 "block_size": 512, 00:05:46.286 "claimed": false, 00:05:46.286 "driver_specific": {}, 00:05:46.286 "memory_domains": [ 00:05:46.286 { 00:05:46.286 "dma_device_id": "system", 00:05:46.286 "dma_device_type": 1 00:05:46.286 }, 00:05:46.286 { 00:05:46.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.286 "dma_device_type": 2 00:05:46.286 } 00:05:46.286 ], 00:05:46.286 "name": "Malloc0", 
00:05:46.286 "num_blocks": 16384, 00:05:46.286 "product_name": "Malloc disk", 00:05:46.286 "supported_io_types": { 00:05:46.286 "abort": true, 00:05:46.286 "compare": false, 00:05:46.286 "compare_and_write": false, 00:05:46.286 "copy": true, 00:05:46.286 "flush": true, 00:05:46.286 "get_zone_info": false, 00:05:46.286 "nvme_admin": false, 00:05:46.286 "nvme_io": false, 00:05:46.286 "nvme_io_md": false, 00:05:46.286 "nvme_iov_md": false, 00:05:46.286 "read": true, 00:05:46.286 "reset": true, 00:05:46.286 "seek_data": false, 00:05:46.286 "seek_hole": false, 00:05:46.286 "unmap": true, 00:05:46.286 "write": true, 00:05:46.286 "write_zeroes": true, 00:05:46.286 "zcopy": true, 00:05:46.286 "zone_append": false, 00:05:46.286 "zone_management": false 00:05:46.286 }, 00:05:46.286 "uuid": "44f6f294-3710-4d10-9cb1-0d7f6b0fe585", 00:05:46.286 "zoned": false 00:05:46.286 } 00:05:46.286 ]' 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.286 [2024-10-01 15:18:45.308468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:46.286 [2024-10-01 15:18:45.308547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.286 [2024-10-01 15:18:45.308572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f63ca0 00:05:46.286 [2024-10-01 15:18:45.308582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.286 [2024-10-01 15:18:45.310181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.286 [2024-10-01 15:18:45.310220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.286 Passthru0 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.286 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.286 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.286 { 00:05:46.286 "aliases": [ 00:05:46.286 "44f6f294-3710-4d10-9cb1-0d7f6b0fe585" 00:05:46.286 ], 00:05:46.286 "assigned_rate_limits": { 00:05:46.286 "r_mbytes_per_sec": 0, 00:05:46.286 "rw_ios_per_sec": 0, 00:05:46.286 "rw_mbytes_per_sec": 0, 00:05:46.286 "w_mbytes_per_sec": 0 00:05:46.286 }, 00:05:46.286 "block_size": 512, 00:05:46.286 "claim_type": "exclusive_write", 00:05:46.286 "claimed": true, 00:05:46.286 "driver_specific": {}, 00:05:46.286 "memory_domains": [ 00:05:46.286 { 00:05:46.286 "dma_device_id": "system", 00:05:46.286 "dma_device_type": 1 00:05:46.286 }, 00:05:46.286 { 00:05:46.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.286 "dma_device_type": 2 00:05:46.286 } 00:05:46.286 ], 00:05:46.286 "name": "Malloc0", 00:05:46.286 "num_blocks": 16384, 00:05:46.286 "product_name": "Malloc disk", 00:05:46.286 "supported_io_types": { 00:05:46.286 "abort": true, 00:05:46.286 "compare": false, 00:05:46.286 
"compare_and_write": false, 00:05:46.286 "copy": true, 00:05:46.286 "flush": true, 00:05:46.286 "get_zone_info": false, 00:05:46.286 "nvme_admin": false, 00:05:46.286 "nvme_io": false, 00:05:46.286 "nvme_io_md": false, 00:05:46.286 "nvme_iov_md": false, 00:05:46.286 "read": true, 00:05:46.286 "reset": true, 00:05:46.286 "seek_data": false, 00:05:46.286 "seek_hole": false, 00:05:46.286 "unmap": true, 00:05:46.286 "write": true, 00:05:46.286 "write_zeroes": true, 00:05:46.286 "zcopy": true, 00:05:46.286 "zone_append": false, 00:05:46.286 "zone_management": false 00:05:46.286 }, 00:05:46.286 "uuid": "44f6f294-3710-4d10-9cb1-0d7f6b0fe585", 00:05:46.286 "zoned": false 00:05:46.286 }, 00:05:46.286 { 00:05:46.286 "aliases": [ 00:05:46.286 "e401cde5-2cd7-5d8c-b733-f61aea20e621" 00:05:46.286 ], 00:05:46.286 "assigned_rate_limits": { 00:05:46.286 "r_mbytes_per_sec": 0, 00:05:46.286 "rw_ios_per_sec": 0, 00:05:46.287 "rw_mbytes_per_sec": 0, 00:05:46.287 "w_mbytes_per_sec": 0 00:05:46.287 }, 00:05:46.287 "block_size": 512, 00:05:46.287 "claimed": false, 00:05:46.287 "driver_specific": { 00:05:46.287 "passthru": { 00:05:46.287 "base_bdev_name": "Malloc0", 00:05:46.287 "name": "Passthru0" 00:05:46.287 } 00:05:46.287 }, 00:05:46.287 "memory_domains": [ 00:05:46.287 { 00:05:46.287 "dma_device_id": "system", 00:05:46.287 "dma_device_type": 1 00:05:46.287 }, 00:05:46.287 { 00:05:46.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.287 "dma_device_type": 2 00:05:46.287 } 00:05:46.287 ], 00:05:46.287 "name": "Passthru0", 00:05:46.287 "num_blocks": 16384, 00:05:46.287 "product_name": "passthru", 00:05:46.287 "supported_io_types": { 00:05:46.287 "abort": true, 00:05:46.287 "compare": false, 00:05:46.287 "compare_and_write": false, 00:05:46.287 "copy": true, 00:05:46.287 "flush": true, 00:05:46.287 "get_zone_info": false, 00:05:46.287 "nvme_admin": false, 00:05:46.287 "nvme_io": false, 00:05:46.287 "nvme_io_md": false, 00:05:46.287 "nvme_iov_md": false, 00:05:46.287 "read": true, 00:05:46.287 "reset": true, 00:05:46.287 "seek_data": false, 00:05:46.287 "seek_hole": false, 00:05:46.287 "unmap": true, 00:05:46.287 "write": true, 00:05:46.287 "write_zeroes": true, 00:05:46.287 "zcopy": true, 00:05:46.287 "zone_append": false, 00:05:46.287 "zone_management": false 00:05:46.287 }, 00:05:46.287 "uuid": "e401cde5-2cd7-5d8c-b733-f61aea20e621", 00:05:46.287 "zoned": false 00:05:46.287 } 00:05:46.287 ]' 00:05:46.287 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.287 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.287 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.287 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.287 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.287 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.287 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:46.287 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.287 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.287 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.287 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.287 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.287 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:05:46.287 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.287 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.287 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.544 ************************************ 00:05:46.544 END TEST rpc_integrity 00:05:46.544 ************************************ 00:05:46.544 15:18:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.544 00:05:46.544 real 0m0.352s 00:05:46.544 user 0m0.251s 00:05:46.544 sys 0m0.037s 00:05:46.544 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.544 15:18:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.544 15:18:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:46.544 15:18:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.544 15:18:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.544 15:18:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.544 ************************************ 00:05:46.544 START TEST rpc_plugins 00:05:46.544 ************************************ 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:46.544 { 00:05:46.544 "aliases": [ 00:05:46.544 "5587fcfa-88ce-48f2-80c4-533b5666a87b" 00:05:46.544 ], 00:05:46.544 "assigned_rate_limits": { 00:05:46.544 "r_mbytes_per_sec": 0, 00:05:46.544 "rw_ios_per_sec": 0, 00:05:46.544 "rw_mbytes_per_sec": 0, 00:05:46.544 "w_mbytes_per_sec": 0 00:05:46.544 }, 00:05:46.544 "block_size": 4096, 00:05:46.544 "claimed": false, 00:05:46.544 "driver_specific": {}, 00:05:46.544 "memory_domains": [ 00:05:46.544 { 00:05:46.544 "dma_device_id": "system", 00:05:46.544 "dma_device_type": 1 00:05:46.544 }, 00:05:46.544 { 00:05:46.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.544 "dma_device_type": 2 00:05:46.544 } 00:05:46.544 ], 00:05:46.544 "name": "Malloc1", 00:05:46.544 "num_blocks": 256, 00:05:46.544 "product_name": "Malloc disk", 00:05:46.544 "supported_io_types": { 00:05:46.544 "abort": true, 00:05:46.544 "compare": false, 00:05:46.544 "compare_and_write": false, 00:05:46.544 "copy": true, 00:05:46.544 "flush": true, 00:05:46.544 "get_zone_info": false, 00:05:46.544 "nvme_admin": false, 00:05:46.544 "nvme_io": false, 00:05:46.544 "nvme_io_md": false, 00:05:46.544 "nvme_iov_md": false, 00:05:46.544 "read": true, 00:05:46.544 "reset": true, 00:05:46.544 "seek_data": false, 00:05:46.544 "seek_hole": false, 00:05:46.544 "unmap": true, 00:05:46.544 "write": true, 00:05:46.544 "write_zeroes": true, 00:05:46.544 "zcopy": true, 00:05:46.544 "zone_append": false, 
00:05:46.544 "zone_management": false 00:05:46.544 }, 00:05:46.544 "uuid": "5587fcfa-88ce-48f2-80c4-533b5666a87b", 00:05:46.544 "zoned": false 00:05:46.544 } 00:05:46.544 ]' 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:46.544 ************************************ 00:05:46.544 END TEST rpc_plugins 00:05:46.544 ************************************ 00:05:46.544 15:18:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:46.544 00:05:46.544 real 0m0.166s 00:05:46.544 user 0m0.108s 00:05:46.544 sys 0m0.021s 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.544 15:18:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.802 15:18:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:46.802 15:18:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.802 15:18:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.802 15:18:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.802 ************************************ 00:05:46.802 START TEST rpc_trace_cmd_test 00:05:46.802 ************************************ 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:46.802 "bdev": { 00:05:46.802 "mask": "0x8", 00:05:46.802 "tpoint_mask": "0xffffffffffffffff" 00:05:46.802 }, 00:05:46.802 "bdev_nvme": { 00:05:46.802 "mask": "0x4000", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "bdev_raid": { 00:05:46.802 "mask": "0x20000", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "blob": { 00:05:46.802 "mask": "0x10000", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "blobfs": { 00:05:46.802 "mask": "0x80", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "dsa": { 00:05:46.802 "mask": "0x200", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "ftl": { 00:05:46.802 "mask": "0x40", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "iaa": { 00:05:46.802 "mask": "0x1000", 
00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "iscsi_conn": { 00:05:46.802 "mask": "0x2", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "nvme_pcie": { 00:05:46.802 "mask": "0x800", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "nvme_tcp": { 00:05:46.802 "mask": "0x2000", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "nvmf_rdma": { 00:05:46.802 "mask": "0x10", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "nvmf_tcp": { 00:05:46.802 "mask": "0x20", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "scsi": { 00:05:46.802 "mask": "0x4", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "sock": { 00:05:46.802 "mask": "0x8000", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "thread": { 00:05:46.802 "mask": "0x400", 00:05:46.802 "tpoint_mask": "0x0" 00:05:46.802 }, 00:05:46.802 "tpoint_group_mask": "0x8", 00:05:46.802 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58698" 00:05:46.802 }' 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.802 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:47.060 ************************************ 00:05:47.060 END TEST rpc_trace_cmd_test 00:05:47.060 ************************************ 00:05:47.060 15:18:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:47.060 00:05:47.060 real 0m0.256s 00:05:47.060 user 0m0.223s 00:05:47.060 sys 0m0.021s 00:05:47.060 15:18:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.060 15:18:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:47.060 15:18:46 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:47.060 15:18:46 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:47.060 15:18:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.060 15:18:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.060 15:18:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.060 ************************************ 00:05:47.060 START TEST go_rpc 00:05:47.060 ************************************ 00:05:47.060 15:18:46 rpc.go_rpc -- common/autotest_common.sh@1125 -- # go_rpc 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.060 15:18:46 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.060 15:18:46 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.060 15:18:46 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["a2557319-6199-42b9-a7e9-7e7352a06402"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"a2557319-6199-42b9-a7e9-7e7352a06402","zoned":false}]' 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:47.060 15:18:46 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.060 15:18:46 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.060 15:18:46 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:47.060 15:18:46 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:47.060 00:05:47.060 real 0m0.198s 00:05:47.060 user 0m0.134s 00:05:47.060 sys 0m0.031s 00:05:47.060 ************************************ 00:05:47.060 END TEST go_rpc 00:05:47.060 ************************************ 00:05:47.060 15:18:46 rpc.go_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.060 15:18:46 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.318 15:18:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:47.318 15:18:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:47.318 15:18:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.318 15:18:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.318 15:18:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.318 ************************************ 00:05:47.318 START TEST rpc_daemon_integrity 00:05:47.318 ************************************ 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.318 { 00:05:47.318 "aliases": [ 00:05:47.318 "46241117-1aab-4552-9a6d-1980e3a98a19" 00:05:47.318 ], 00:05:47.318 "assigned_rate_limits": { 00:05:47.318 "r_mbytes_per_sec": 0, 00:05:47.318 "rw_ios_per_sec": 0, 00:05:47.318 "rw_mbytes_per_sec": 0, 00:05:47.318 "w_mbytes_per_sec": 0 00:05:47.318 }, 00:05:47.318 "block_size": 512, 00:05:47.318 "claimed": false, 00:05:47.318 "driver_specific": {}, 00:05:47.318 "memory_domains": [ 00:05:47.318 { 00:05:47.318 "dma_device_id": "system", 00:05:47.318 "dma_device_type": 1 00:05:47.318 }, 00:05:47.318 { 00:05:47.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.318 "dma_device_type": 2 00:05:47.318 } 00:05:47.318 ], 00:05:47.318 "name": "Malloc3", 00:05:47.318 "num_blocks": 16384, 00:05:47.318 "product_name": "Malloc disk", 00:05:47.318 "supported_io_types": { 00:05:47.318 "abort": true, 00:05:47.318 "compare": false, 00:05:47.318 "compare_and_write": false, 00:05:47.318 "copy": true, 00:05:47.318 "flush": true, 00:05:47.318 "get_zone_info": false, 00:05:47.318 "nvme_admin": false, 00:05:47.318 "nvme_io": false, 00:05:47.318 "nvme_io_md": false, 00:05:47.318 "nvme_iov_md": false, 00:05:47.318 "read": true, 00:05:47.318 "reset": true, 00:05:47.318 "seek_data": false, 00:05:47.318 "seek_hole": false, 00:05:47.318 "unmap": true, 00:05:47.318 "write": true, 00:05:47.318 "write_zeroes": true, 00:05:47.318 "zcopy": true, 00:05:47.318 "zone_append": false, 00:05:47.318 "zone_management": false 00:05:47.318 }, 00:05:47.318 "uuid": "46241117-1aab-4552-9a6d-1980e3a98a19", 00:05:47.318 "zoned": false 00:05:47.318 } 00:05:47.318 ]' 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.318 [2024-10-01 15:18:46.396835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:47.318 [2024-10-01 15:18:46.396896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.318 [2024-10-01 15:18:46.396919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fbd020 00:05:47.318 [2024-10-01 15:18:46.396930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.318 [2024-10-01 15:18:46.398482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:05:47.318 [2024-10-01 15:18:46.398523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.318 Passthru0 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.318 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.318 { 00:05:47.318 "aliases": [ 00:05:47.318 "46241117-1aab-4552-9a6d-1980e3a98a19" 00:05:47.318 ], 00:05:47.318 "assigned_rate_limits": { 00:05:47.318 "r_mbytes_per_sec": 0, 00:05:47.318 "rw_ios_per_sec": 0, 00:05:47.318 "rw_mbytes_per_sec": 0, 00:05:47.318 "w_mbytes_per_sec": 0 00:05:47.318 }, 00:05:47.318 "block_size": 512, 00:05:47.318 "claim_type": "exclusive_write", 00:05:47.318 "claimed": true, 00:05:47.318 "driver_specific": {}, 00:05:47.318 "memory_domains": [ 00:05:47.318 { 00:05:47.318 "dma_device_id": "system", 00:05:47.318 "dma_device_type": 1 00:05:47.318 }, 00:05:47.318 { 00:05:47.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.318 "dma_device_type": 2 00:05:47.318 } 00:05:47.318 ], 00:05:47.318 "name": "Malloc3", 00:05:47.318 "num_blocks": 16384, 00:05:47.318 "product_name": "Malloc disk", 00:05:47.318 "supported_io_types": { 00:05:47.318 "abort": true, 00:05:47.318 "compare": false, 00:05:47.318 "compare_and_write": false, 00:05:47.318 "copy": true, 00:05:47.318 "flush": true, 00:05:47.318 "get_zone_info": false, 00:05:47.318 "nvme_admin": false, 00:05:47.318 "nvme_io": false, 00:05:47.318 "nvme_io_md": false, 00:05:47.318 "nvme_iov_md": false, 00:05:47.318 "read": true, 00:05:47.318 "reset": true, 00:05:47.318 "seek_data": false, 00:05:47.318 "seek_hole": false, 00:05:47.318 "unmap": true, 00:05:47.318 "write": true, 00:05:47.318 "write_zeroes": true, 00:05:47.318 "zcopy": true, 00:05:47.318 "zone_append": false, 00:05:47.318 "zone_management": false 00:05:47.318 }, 00:05:47.318 "uuid": "46241117-1aab-4552-9a6d-1980e3a98a19", 00:05:47.318 "zoned": false 00:05:47.318 }, 00:05:47.318 { 00:05:47.318 "aliases": [ 00:05:47.318 "4785246d-557c-560a-abec-4227afe74d0d" 00:05:47.318 ], 00:05:47.318 "assigned_rate_limits": { 00:05:47.318 "r_mbytes_per_sec": 0, 00:05:47.318 "rw_ios_per_sec": 0, 00:05:47.318 "rw_mbytes_per_sec": 0, 00:05:47.318 "w_mbytes_per_sec": 0 00:05:47.318 }, 00:05:47.318 "block_size": 512, 00:05:47.318 "claimed": false, 00:05:47.318 "driver_specific": { 00:05:47.318 "passthru": { 00:05:47.318 "base_bdev_name": "Malloc3", 00:05:47.318 "name": "Passthru0" 00:05:47.318 } 00:05:47.318 }, 00:05:47.318 "memory_domains": [ 00:05:47.318 { 00:05:47.318 "dma_device_id": "system", 00:05:47.318 "dma_device_type": 1 00:05:47.318 }, 00:05:47.318 { 00:05:47.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.318 "dma_device_type": 2 00:05:47.318 } 00:05:47.318 ], 00:05:47.318 "name": "Passthru0", 00:05:47.318 "num_blocks": 16384, 00:05:47.318 "product_name": "passthru", 00:05:47.318 "supported_io_types": { 00:05:47.318 "abort": true, 00:05:47.318 "compare": false, 00:05:47.318 "compare_and_write": false, 00:05:47.318 "copy": true, 00:05:47.318 "flush": true, 00:05:47.318 "get_zone_info": false, 00:05:47.318 "nvme_admin": false, 00:05:47.318 "nvme_io": false, 
00:05:47.318 "nvme_io_md": false, 00:05:47.318 "nvme_iov_md": false, 00:05:47.318 "read": true, 00:05:47.318 "reset": true, 00:05:47.318 "seek_data": false, 00:05:47.318 "seek_hole": false, 00:05:47.318 "unmap": true, 00:05:47.318 "write": true, 00:05:47.318 "write_zeroes": true, 00:05:47.318 "zcopy": true, 00:05:47.318 "zone_append": false, 00:05:47.318 "zone_management": false 00:05:47.318 }, 00:05:47.319 "uuid": "4785246d-557c-560a-abec-4227afe74d0d", 00:05:47.319 "zoned": false 00:05:47.319 } 00:05:47.319 ]' 00:05:47.319 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:47.577 ************************************ 00:05:47.577 END TEST rpc_daemon_integrity 00:05:47.577 ************************************ 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.577 00:05:47.577 real 0m0.308s 00:05:47.577 user 0m0.207s 00:05:47.577 sys 0m0.038s 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.577 15:18:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.577 15:18:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:47.577 15:18:46 rpc -- rpc/rpc.sh@84 -- # killprocess 58698 00:05:47.577 15:18:46 rpc -- common/autotest_common.sh@950 -- # '[' -z 58698 ']' 00:05:47.577 15:18:46 rpc -- common/autotest_common.sh@954 -- # kill -0 58698 00:05:47.577 15:18:46 rpc -- common/autotest_common.sh@955 -- # uname 00:05:47.577 15:18:46 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.577 15:18:46 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58698 00:05:47.577 killing process with pid 58698 00:05:47.577 15:18:46 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.577 15:18:46 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.577 15:18:46 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58698' 00:05:47.577 15:18:46 rpc -- common/autotest_common.sh@969 -- # kill 58698 00:05:47.577 15:18:46 rpc -- common/autotest_common.sh@974 -- # wait 58698 
00:05:47.836 ************************************ 00:05:47.836 END TEST rpc 00:05:47.836 ************************************ 00:05:47.836 00:05:47.836 real 0m3.077s 00:05:47.836 user 0m4.216s 00:05:47.836 sys 0m0.641s 00:05:47.836 15:18:46 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.836 15:18:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.836 15:18:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:47.836 15:18:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.836 15:18:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.836 15:18:46 -- common/autotest_common.sh@10 -- # set +x 00:05:47.836 ************************************ 00:05:47.836 START TEST skip_rpc 00:05:47.836 ************************************ 00:05:47.836 15:18:46 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:48.094 * Looking for test storage... 00:05:48.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.094 15:18:47 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:48.094 15:18:47 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:48.094 15:18:47 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:48.094 15:18:47 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.094 15:18:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:48.094 15:18:47 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.094 15:18:47 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:48.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.094 --rc genhtml_branch_coverage=1 00:05:48.094 --rc genhtml_function_coverage=1 00:05:48.094 --rc genhtml_legend=1 00:05:48.094 --rc geninfo_all_blocks=1 00:05:48.094 --rc geninfo_unexecuted_blocks=1 00:05:48.094 00:05:48.094 ' 00:05:48.094 15:18:47 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:48.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.094 --rc genhtml_branch_coverage=1 00:05:48.094 --rc genhtml_function_coverage=1 00:05:48.094 --rc genhtml_legend=1 00:05:48.094 --rc geninfo_all_blocks=1 00:05:48.094 --rc geninfo_unexecuted_blocks=1 00:05:48.094 00:05:48.094 ' 00:05:48.094 15:18:47 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:48.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.094 --rc genhtml_branch_coverage=1 00:05:48.094 --rc genhtml_function_coverage=1 00:05:48.094 --rc genhtml_legend=1 00:05:48.094 --rc geninfo_all_blocks=1 00:05:48.094 --rc geninfo_unexecuted_blocks=1 00:05:48.094 00:05:48.094 ' 00:05:48.094 15:18:47 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:48.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.094 --rc genhtml_branch_coverage=1 00:05:48.094 --rc genhtml_function_coverage=1 00:05:48.094 --rc genhtml_legend=1 00:05:48.094 --rc geninfo_all_blocks=1 00:05:48.094 --rc geninfo_unexecuted_blocks=1 00:05:48.094 00:05:48.094 ' 00:05:48.094 15:18:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:48.094 15:18:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.094 15:18:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:48.095 15:18:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.095 15:18:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.095 15:18:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.095 ************************************ 00:05:48.095 START TEST skip_rpc 00:05:48.095 ************************************ 00:05:48.095 15:18:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:48.095 15:18:47 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58967 00:05:48.095 15:18:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:48.095 15:18:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.095 15:18:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:48.095 [2024-10-01 15:18:47.215667] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:05:48.095 [2024-10-01 15:18:47.215765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ] 00:05:48.353 [2024-10-01 15:18:47.350325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.353 [2024-10-01 15:18:47.420092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.620 2024/10/01 15:18:52 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58967 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58967 ']' 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58967 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58967 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.620 killing process with pid 58967 00:05:53.620 15:18:52 skip_rpc.skip_rpc 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58967' 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58967 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58967 00:05:53.620 00:05:53.620 real 0m5.310s 00:05:53.620 user 0m5.004s 00:05:53.620 sys 0m0.198s 00:05:53.620 ************************************ 00:05:53.620 END TEST skip_rpc 00:05:53.620 ************************************ 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.620 15:18:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.620 15:18:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:53.620 15:18:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.620 15:18:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.620 15:18:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.620 ************************************ 00:05:53.620 START TEST skip_rpc_with_json 00:05:53.620 ************************************ 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59054 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59054 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59054 ']' 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.620 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.620 [2024-10-01 15:18:52.564366] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
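skip_rpc above covered the negative case: started with --no-rpc-server, the target never creates /var/tmp/spdk.sock, so rpc_cmd spdk_get_version fails with 'connect: no such file or directory' and the test asserts the non-zero exit instead of a version reply. skip_rpc_with_json, starting here, is the positive counterpart: mutate a live target over RPC, serialize its state with save_config, then prove the dump alone can cold-start a second target with no RPC server at all. A sketch of that cycle, using a hypothetical /tmp path in place of the harness's CONFIG_PATH:

    scripts/rpc.py nvmf_create_transport -t tcp      # mutate runtime state
    scripts/rpc.py save_config > /tmp/config.json    # serialize the running configuration
    build/bin/spdk_tgt --no-rpc-server --json /tmp/config.json &   # replay it non-interactively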
00:05:53.620 [2024-10-01 15:18:52.564540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59054 ] 00:05:53.620 [2024-10-01 15:18:52.703025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.620 [2024-10-01 15:18:52.762969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.879 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.879 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:53.879 15:18:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:53.879 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.879 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.879 [2024-10-01 15:18:52.944721] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:53.879 2024/10/01 15:18:52 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:53.879 request: 00:05:53.879 { 00:05:53.879 "method": "nvmf_get_transports", 00:05:53.879 "params": { 00:05:53.879 "trtype": "tcp" 00:05:53.879 } 00:05:53.879 } 00:05:53.879 Got JSON-RPC error response 00:05:53.879 GoRPCClient: error on JSON-RPC call 00:05:53.880 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:53.880 15:18:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:53.880 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.880 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.880 [2024-10-01 15:18:52.956803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.880 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.880 15:18:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:53.880 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.880 15:18:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.138 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.138 15:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.138 { 00:05:54.138 "subsystems": [ 00:05:54.138 { 00:05:54.138 "subsystem": "fsdev", 00:05:54.138 "config": [ 00:05:54.138 { 00:05:54.138 "method": "fsdev_set_opts", 00:05:54.138 "params": { 00:05:54.138 "fsdev_io_cache_size": 256, 00:05:54.138 "fsdev_io_pool_size": 65535 00:05:54.138 } 00:05:54.138 } 00:05:54.138 ] 00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "subsystem": "keyring", 00:05:54.138 "config": [] 00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "subsystem": "iobuf", 00:05:54.138 "config": [ 00:05:54.138 { 00:05:54.138 "method": "iobuf_set_options", 00:05:54.138 "params": { 00:05:54.138 "large_bufsize": 135168, 00:05:54.138 "large_pool_count": 1024, 00:05:54.138 "small_bufsize": 8192, 00:05:54.138 "small_pool_count": 8192 00:05:54.138 } 00:05:54.138 } 00:05:54.138 ] 
00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "subsystem": "sock", 00:05:54.138 "config": [ 00:05:54.138 { 00:05:54.138 "method": "sock_set_default_impl", 00:05:54.138 "params": { 00:05:54.138 "impl_name": "posix" 00:05:54.138 } 00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "method": "sock_impl_set_options", 00:05:54.138 "params": { 00:05:54.138 "enable_ktls": false, 00:05:54.138 "enable_placement_id": 0, 00:05:54.138 "enable_quickack": false, 00:05:54.138 "enable_recv_pipe": true, 00:05:54.138 "enable_zerocopy_send_client": false, 00:05:54.138 "enable_zerocopy_send_server": true, 00:05:54.138 "impl_name": "ssl", 00:05:54.138 "recv_buf_size": 4096, 00:05:54.138 "send_buf_size": 4096, 00:05:54.138 "tls_version": 0, 00:05:54.138 "zerocopy_threshold": 0 00:05:54.138 } 00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "method": "sock_impl_set_options", 00:05:54.138 "params": { 00:05:54.138 "enable_ktls": false, 00:05:54.138 "enable_placement_id": 0, 00:05:54.138 "enable_quickack": false, 00:05:54.138 "enable_recv_pipe": true, 00:05:54.138 "enable_zerocopy_send_client": false, 00:05:54.138 "enable_zerocopy_send_server": true, 00:05:54.138 "impl_name": "posix", 00:05:54.138 "recv_buf_size": 2097152, 00:05:54.138 "send_buf_size": 2097152, 00:05:54.138 "tls_version": 0, 00:05:54.138 "zerocopy_threshold": 0 00:05:54.138 } 00:05:54.138 } 00:05:54.138 ] 00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "subsystem": "vmd", 00:05:54.138 "config": [] 00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "subsystem": "accel", 00:05:54.138 "config": [ 00:05:54.138 { 00:05:54.138 "method": "accel_set_options", 00:05:54.138 "params": { 00:05:54.138 "buf_count": 2048, 00:05:54.138 "large_cache_size": 16, 00:05:54.138 "sequence_count": 2048, 00:05:54.138 "small_cache_size": 128, 00:05:54.138 "task_count": 2048 00:05:54.138 } 00:05:54.138 } 00:05:54.138 ] 00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "subsystem": "bdev", 00:05:54.138 "config": [ 00:05:54.138 { 00:05:54.138 "method": "bdev_set_options", 00:05:54.138 "params": { 00:05:54.138 "bdev_auto_examine": true, 00:05:54.138 "bdev_io_cache_size": 256, 00:05:54.138 "bdev_io_pool_size": 65535, 00:05:54.138 "iobuf_large_cache_size": 16, 00:05:54.138 "iobuf_small_cache_size": 128 00:05:54.138 } 00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "method": "bdev_raid_set_options", 00:05:54.138 "params": { 00:05:54.138 "process_max_bandwidth_mb_sec": 0, 00:05:54.138 "process_window_size_kb": 1024 00:05:54.138 } 00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "method": "bdev_iscsi_set_options", 00:05:54.138 "params": { 00:05:54.138 "timeout_sec": 30 00:05:54.138 } 00:05:54.138 }, 00:05:54.138 { 00:05:54.138 "method": "bdev_nvme_set_options", 00:05:54.138 "params": { 00:05:54.138 "action_on_timeout": "none", 00:05:54.138 "allow_accel_sequence": false, 00:05:54.138 "arbitration_burst": 0, 00:05:54.138 "bdev_retry_count": 3, 00:05:54.138 "ctrlr_loss_timeout_sec": 0, 00:05:54.138 "delay_cmd_submit": true, 00:05:54.138 "dhchap_dhgroups": [ 00:05:54.138 "null", 00:05:54.138 "ffdhe2048", 00:05:54.138 "ffdhe3072", 00:05:54.138 "ffdhe4096", 00:05:54.138 "ffdhe6144", 00:05:54.138 "ffdhe8192" 00:05:54.138 ], 00:05:54.138 "dhchap_digests": [ 00:05:54.138 "sha256", 00:05:54.138 "sha384", 00:05:54.138 "sha512" 00:05:54.138 ], 00:05:54.138 "disable_auto_failback": false, 00:05:54.138 "fast_io_fail_timeout_sec": 0, 00:05:54.138 "generate_uuids": false, 00:05:54.138 "high_priority_weight": 0, 00:05:54.138 "io_path_stat": false, 00:05:54.138 "io_queue_requests": 0, 00:05:54.138 "keep_alive_timeout_ms": 10000, 
00:05:54.138 "low_priority_weight": 0, 00:05:54.138 "medium_priority_weight": 0, 00:05:54.138 "nvme_adminq_poll_period_us": 10000, 00:05:54.138 "nvme_error_stat": false, 00:05:54.138 "nvme_ioq_poll_period_us": 0, 00:05:54.139 "rdma_cm_event_timeout_ms": 0, 00:05:54.139 "rdma_max_cq_size": 0, 00:05:54.139 "rdma_srq_size": 0, 00:05:54.139 "reconnect_delay_sec": 0, 00:05:54.139 "timeout_admin_us": 0, 00:05:54.139 "timeout_us": 0, 00:05:54.139 "transport_ack_timeout": 0, 00:05:54.139 "transport_retry_count": 4, 00:05:54.139 "transport_tos": 0 00:05:54.139 } 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "method": "bdev_nvme_set_hotplug", 00:05:54.139 "params": { 00:05:54.139 "enable": false, 00:05:54.139 "period_us": 100000 00:05:54.139 } 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "method": "bdev_wait_for_examine" 00:05:54.139 } 00:05:54.139 ] 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "subsystem": "scsi", 00:05:54.139 "config": null 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "subsystem": "scheduler", 00:05:54.139 "config": [ 00:05:54.139 { 00:05:54.139 "method": "framework_set_scheduler", 00:05:54.139 "params": { 00:05:54.139 "name": "static" 00:05:54.139 } 00:05:54.139 } 00:05:54.139 ] 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "subsystem": "vhost_scsi", 00:05:54.139 "config": [] 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "subsystem": "vhost_blk", 00:05:54.139 "config": [] 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "subsystem": "ublk", 00:05:54.139 "config": [] 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "subsystem": "nbd", 00:05:54.139 "config": [] 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "subsystem": "nvmf", 00:05:54.139 "config": [ 00:05:54.139 { 00:05:54.139 "method": "nvmf_set_config", 00:05:54.139 "params": { 00:05:54.139 "admin_cmd_passthru": { 00:05:54.139 "identify_ctrlr": false 00:05:54.139 }, 00:05:54.139 "dhchap_dhgroups": [ 00:05:54.139 "null", 00:05:54.139 "ffdhe2048", 00:05:54.139 "ffdhe3072", 00:05:54.139 "ffdhe4096", 00:05:54.139 "ffdhe6144", 00:05:54.139 "ffdhe8192" 00:05:54.139 ], 00:05:54.139 "dhchap_digests": [ 00:05:54.139 "sha256", 00:05:54.139 "sha384", 00:05:54.139 "sha512" 00:05:54.139 ], 00:05:54.139 "discovery_filter": "match_any" 00:05:54.139 } 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "method": "nvmf_set_max_subsystems", 00:05:54.139 "params": { 00:05:54.139 "max_subsystems": 1024 00:05:54.139 } 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "method": "nvmf_set_crdt", 00:05:54.139 "params": { 00:05:54.139 "crdt1": 0, 00:05:54.139 "crdt2": 0, 00:05:54.139 "crdt3": 0 00:05:54.139 } 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "method": "nvmf_create_transport", 00:05:54.139 "params": { 00:05:54.139 "abort_timeout_sec": 1, 00:05:54.139 "ack_timeout": 0, 00:05:54.139 "buf_cache_size": 4294967295, 00:05:54.139 "c2h_success": true, 00:05:54.139 "data_wr_pool_size": 0, 00:05:54.139 "dif_insert_or_strip": false, 00:05:54.139 "in_capsule_data_size": 4096, 00:05:54.139 "io_unit_size": 131072, 00:05:54.139 "max_aq_depth": 128, 00:05:54.139 "max_io_qpairs_per_ctrlr": 127, 00:05:54.139 "max_io_size": 131072, 00:05:54.139 "max_queue_depth": 128, 00:05:54.139 "num_shared_buffers": 511, 00:05:54.139 "sock_priority": 0, 00:05:54.139 "trtype": "TCP", 00:05:54.139 "zcopy": false 00:05:54.139 } 00:05:54.139 } 00:05:54.139 ] 00:05:54.139 }, 00:05:54.139 { 00:05:54.139 "subsystem": "iscsi", 00:05:54.139 "config": [ 00:05:54.139 { 00:05:54.139 "method": "iscsi_set_options", 00:05:54.139 "params": { 00:05:54.139 "allow_duplicated_isid": false, 00:05:54.139 "chap_group": 0, 
00:05:54.139 "data_out_pool_size": 2048, 00:05:54.139 "default_time2retain": 20, 00:05:54.139 "default_time2wait": 2, 00:05:54.139 "disable_chap": false, 00:05:54.139 "error_recovery_level": 0, 00:05:54.139 "first_burst_length": 8192, 00:05:54.139 "immediate_data": true, 00:05:54.139 "immediate_data_pool_size": 16384, 00:05:54.139 "max_connections_per_session": 2, 00:05:54.139 "max_large_datain_per_connection": 64, 00:05:54.139 "max_queue_depth": 64, 00:05:54.139 "max_r2t_per_connection": 4, 00:05:54.139 "max_sessions": 128, 00:05:54.139 "mutual_chap": false, 00:05:54.139 "node_base": "iqn.2016-06.io.spdk", 00:05:54.139 "nop_in_interval": 30, 00:05:54.139 "nop_timeout": 60, 00:05:54.139 "pdu_pool_size": 36864, 00:05:54.139 "require_chap": false 00:05:54.139 } 00:05:54.139 } 00:05:54.139 ] 00:05:54.139 } 00:05:54.139 ] 00:05:54.139 } 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59054 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59054 ']' 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59054 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59054 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.139 killing process with pid 59054 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59054' 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59054 00:05:54.139 15:18:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59054 00:05:54.398 15:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59080 00:05:54.398 15:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.398 15:18:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59080 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59080 ']' 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59080 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59080 00:05:59.668 killing process with pid 59080 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59080' 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59080 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59080 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.668 00:05:59.668 real 0m6.251s 00:05:59.668 user 0m5.968s 00:05:59.668 sys 0m0.447s 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.668 ************************************ 00:05:59.668 END TEST skip_rpc_with_json 00:05:59.668 ************************************ 00:05:59.668 15:18:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.668 15:18:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.668 15:18:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.668 15:18:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.668 ************************************ 00:05:59.668 START TEST skip_rpc_with_delay 00:05:59.668 ************************************ 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:59.668 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.927 [2024-10-01 15:18:58.865962] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
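Note: the spdk_app_start error above is the expected result. skip_rpc_with_delay launches spdk_tgt with both --no-rpc-server and --wait-for-rpc, a contradictory pair that the app layer rejects before initialization. A minimal sketch of the same check, assuming the build path used in this run:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # prints: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
    echo $?    # the NOT wrapper only requires this to be non-zero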
00:05:59.927 [2024-10-01 15:18:58.866097] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:59.927 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:59.927 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.927 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.927 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.927 00:05:59.927 real 0m0.095s 00:05:59.927 user 0m0.058s 00:05:59.927 sys 0m0.036s 00:05:59.927 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.927 15:18:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:59.927 ************************************ 00:05:59.927 END TEST skip_rpc_with_delay 00:05:59.927 ************************************ 00:05:59.927 15:18:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:59.927 15:18:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:59.927 15:18:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:59.927 15:18:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.927 15:18:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.927 15:18:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.927 ************************************ 00:05:59.927 START TEST exit_on_failed_rpc_init 00:05:59.927 ************************************ 00:05:59.927 15:18:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:59.927 15:18:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59189 00:05:59.927 15:18:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.927 15:18:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59189 00:05:59.927 15:18:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59189 ']' 00:05:59.927 15:18:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.927 15:18:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.927 15:18:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.927 15:18:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.927 15:18:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.927 [2024-10-01 15:18:59.025764] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
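Note: exit_on_failed_rpc_init, which starts here, provokes an RPC socket conflict on purpose. The first spdk_tgt instance (pid 59189, core mask 0x1) claims the default /var/tmp/spdk.sock; a second instance started with -m 0x2 must then fail to bind it and exit non-zero. A rough sketch of the conflict, assuming both instances default to the same RPC socket path:

    ./build/bin/spdk_tgt -m 0x1 &    # first target owns /var/tmp/spdk.sock
    # (the test polls the socket with waitforlisten rather than sleeping)
    ./build/bin/spdk_tgt -m 0x2      # fails: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.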
00:05:59.927 [2024-10-01 15:18:59.025916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59189 ] 00:06:00.186 [2024-10-01 15:18:59.169981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.186 [2024-10-01 15:18:59.229975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:00.444 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.444 [2024-10-01 15:18:59.476725] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:06:00.444 [2024-10-01 15:18:59.476855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59206 ] 00:06:00.703 [2024-10-01 15:18:59.614589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.703 [2024-10-01 15:18:59.684849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.703 [2024-10-01 15:18:59.684951] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:00.703 [2024-10-01 15:18:59.684969] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:00.703 [2024-10-01 15:18:59.684978] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59189 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59189 ']' 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59189 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59189 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59189' 00:06:00.703 killing process with pid 59189 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59189 00:06:00.703 15:18:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59189 00:06:00.961 00:06:00.961 real 0m1.145s 00:06:00.961 user 0m1.370s 00:06:00.961 sys 0m0.291s 00:06:00.961 15:19:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.961 15:19:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.961 ************************************ 00:06:00.961 END TEST exit_on_failed_rpc_init 00:06:00.961 ************************************ 00:06:00.961 15:19:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:00.961 00:06:00.961 real 0m13.166s 00:06:00.961 user 0m12.569s 00:06:00.961 sys 0m1.163s 00:06:00.961 15:19:00 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.961 15:19:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.961 ************************************ 00:06:00.961 END TEST skip_rpc 00:06:00.961 ************************************ 00:06:01.220 15:19:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:01.220 15:19:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.220 15:19:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.220 15:19:00 -- common/autotest_common.sh@10 -- # set +x 00:06:01.220 
************************************ 00:06:01.220 START TEST rpc_client 00:06:01.220 ************************************ 00:06:01.220 15:19:00 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:01.220 * Looking for test storage... 00:06:01.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:01.220 15:19:00 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:01.220 15:19:00 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:01.220 15:19:00 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:01.220 15:19:00 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.220 15:19:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:01.220 15:19:00 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.220 15:19:00 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:01.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.221 --rc genhtml_branch_coverage=1 00:06:01.221 --rc genhtml_function_coverage=1 00:06:01.221 --rc genhtml_legend=1 00:06:01.221 --rc geninfo_all_blocks=1 00:06:01.221 --rc geninfo_unexecuted_blocks=1 00:06:01.221 00:06:01.221 ' 00:06:01.221 15:19:00 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:01.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.221 --rc genhtml_branch_coverage=1 00:06:01.221 --rc genhtml_function_coverage=1 00:06:01.221 --rc genhtml_legend=1 00:06:01.221 --rc geninfo_all_blocks=1 00:06:01.221 --rc geninfo_unexecuted_blocks=1 00:06:01.221 00:06:01.221 ' 00:06:01.221 15:19:00 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:01.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.221 --rc genhtml_branch_coverage=1 00:06:01.221 --rc genhtml_function_coverage=1 00:06:01.221 --rc genhtml_legend=1 00:06:01.221 --rc geninfo_all_blocks=1 00:06:01.221 --rc geninfo_unexecuted_blocks=1 00:06:01.221 00:06:01.221 ' 00:06:01.221 15:19:00 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:01.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.221 --rc genhtml_branch_coverage=1 00:06:01.221 --rc genhtml_function_coverage=1 00:06:01.221 --rc genhtml_legend=1 00:06:01.221 --rc geninfo_all_blocks=1 00:06:01.221 --rc geninfo_unexecuted_blocks=1 00:06:01.221 00:06:01.221 ' 00:06:01.221 15:19:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:01.221 OK 00:06:01.221 15:19:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:01.221 00:06:01.221 real 0m0.202s 00:06:01.221 user 0m0.132s 00:06:01.221 sys 0m0.079s 00:06:01.221 15:19:00 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.221 15:19:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:01.221 ************************************ 00:06:01.221 END TEST rpc_client 00:06:01.221 ************************************ 00:06:01.480 15:19:00 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:01.480 15:19:00 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.480 15:19:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.480 15:19:00 -- common/autotest_common.sh@10 -- # set +x 00:06:01.480 ************************************ 00:06:01.480 START TEST json_config 00:06:01.480 ************************************ 00:06:01.480 15:19:00 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:01.480 15:19:00 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:01.480 15:19:00 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:01.480 15:19:00 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:01.480 15:19:00 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:01.480 15:19:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.480 15:19:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.480 15:19:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.480 15:19:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.481 15:19:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.481 15:19:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.481 15:19:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.481 15:19:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.481 15:19:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.481 15:19:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.481 15:19:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.481 15:19:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:01.481 15:19:00 json_config -- scripts/common.sh@345 -- # : 1 00:06:01.481 15:19:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.481 15:19:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.481 15:19:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:01.481 15:19:00 json_config -- scripts/common.sh@353 -- # local d=1 00:06:01.481 15:19:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.481 15:19:00 json_config -- scripts/common.sh@355 -- # echo 1 00:06:01.481 15:19:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.481 15:19:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:01.481 15:19:00 json_config -- scripts/common.sh@353 -- # local d=2 00:06:01.481 15:19:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.481 15:19:00 json_config -- scripts/common.sh@355 -- # echo 2 00:06:01.481 15:19:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.481 15:19:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.481 15:19:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.481 15:19:00 json_config -- scripts/common.sh@368 -- # return 0 00:06:01.481 15:19:00 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.481 15:19:00 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:01.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.481 --rc genhtml_branch_coverage=1 00:06:01.481 --rc genhtml_function_coverage=1 00:06:01.481 --rc genhtml_legend=1 00:06:01.481 --rc geninfo_all_blocks=1 00:06:01.481 --rc geninfo_unexecuted_blocks=1 00:06:01.481 00:06:01.481 ' 00:06:01.481 15:19:00 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:01.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.481 --rc genhtml_branch_coverage=1 00:06:01.481 --rc genhtml_function_coverage=1 00:06:01.481 --rc genhtml_legend=1 00:06:01.481 --rc geninfo_all_blocks=1 00:06:01.481 --rc geninfo_unexecuted_blocks=1 00:06:01.481 00:06:01.481 ' 00:06:01.481 15:19:00 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:01.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.481 --rc genhtml_branch_coverage=1 00:06:01.481 --rc genhtml_function_coverage=1 00:06:01.481 --rc genhtml_legend=1 00:06:01.481 --rc geninfo_all_blocks=1 00:06:01.481 --rc geninfo_unexecuted_blocks=1 00:06:01.481 00:06:01.481 ' 00:06:01.481 15:19:00 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:01.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.481 --rc genhtml_branch_coverage=1 00:06:01.481 --rc genhtml_function_coverage=1 00:06:01.481 --rc genhtml_legend=1 00:06:01.481 --rc geninfo_all_blocks=1 00:06:01.481 --rc geninfo_unexecuted_blocks=1 00:06:01.481 00:06:01.481 ' 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.481 15:19:00 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.481 15:19:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:01.481 15:19:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.481 15:19:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.481 15:19:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.481 15:19:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.481 15:19:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.481 15:19:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.481 15:19:00 json_config -- paths/export.sh@5 -- # export PATH 00:06:01.481 15:19:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@51 -- # : 0 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:01.481 15:19:00 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:01.481 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:01.481 15:19:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:01.481 INFO: JSON configuration test init 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:01.481 15:19:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:01.481 15:19:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:01.481 15:19:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:01.481 15:19:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.481 15:19:00 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:01.481 15:19:00 json_config -- json_config/common.sh@9 -- # local app=target 00:06:01.481 15:19:00 json_config -- json_config/common.sh@10 -- # shift 
00:06:01.481 15:19:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.481 15:19:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.481 15:19:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.481 15:19:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.481 15:19:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.481 15:19:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59340 00:06:01.481 15:19:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.481 15:19:00 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:01.482 Waiting for target to run... 00:06:01.482 15:19:00 json_config -- json_config/common.sh@25 -- # waitforlisten 59340 /var/tmp/spdk_tgt.sock 00:06:01.482 15:19:00 json_config -- common/autotest_common.sh@831 -- # '[' -z 59340 ']' 00:06:01.482 15:19:00 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.482 15:19:00 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.482 15:19:00 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.482 15:19:00 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.482 15:19:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.760 [2024-10-01 15:19:00.670467] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:06:01.760 [2024-10-01 15:19:00.670582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59340 ] 00:06:02.018 [2024-10-01 15:19:00.965615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.018 [2024-10-01 15:19:01.013269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.585 00:06:02.585 15:19:01 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.585 15:19:01 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:02.585 15:19:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:02.585 15:19:01 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:02.585 15:19:01 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:02.585 15:19:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:02.585 15:19:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.585 15:19:01 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:02.585 15:19:01 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:02.585 15:19:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:02.585 15:19:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.585 15:19:01 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:02.585 15:19:01 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:02.585 15:19:01 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:03.151 15:19:02 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:03.151 15:19:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:03.151 15:19:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.151 15:19:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.151 15:19:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:03.151 15:19:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:03.151 15:19:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:03.151 15:19:02 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:03.151 15:19:02 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:03.151 15:19:02 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:03.151 15:19:02 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:03.151 15:19:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@54 -- # sort 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:03.718 15:19:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:03.718 15:19:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:03.718 15:19:02 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:03.719 15:19:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.719 15:19:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.719 15:19:02 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:03.719 15:19:02 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:06:03.719 15:19:02 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:03.719 15:19:02 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.719 15:19:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.977 MallocForNvmf0 00:06:03.977 15:19:03 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.977 15:19:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:04.233 MallocForNvmf1 00:06:04.233 15:19:03 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:04.233 15:19:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:04.491 [2024-10-01 15:19:03.656378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.749 15:19:03 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.749 15:19:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.062 15:19:04 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:05.062 15:19:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:05.336 15:19:04 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:05.336 15:19:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:05.594 15:19:04 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:05.594 15:19:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.159 [2024-10-01 15:19:05.065115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:06.159 15:19:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:06.159 15:19:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:06.159 15:19:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.159 15:19:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:06.159 15:19:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:06.159 15:19:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.159 15:19:05 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:06.159 15:19:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:06:06.159 15:19:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:06.417 MallocBdevForConfigChangeCheck 00:06:06.417 15:19:05 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:06.417 15:19:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:06.417 15:19:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.417 15:19:05 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:06.417 15:19:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.984 INFO: shutting down applications... 00:06:06.984 15:19:05 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:06.984 15:19:05 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:06.984 15:19:05 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:06.984 15:19:05 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:06.984 15:19:05 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:07.242 Calling clear_iscsi_subsystem 00:06:07.242 Calling clear_nvmf_subsystem 00:06:07.242 Calling clear_nbd_subsystem 00:06:07.242 Calling clear_ublk_subsystem 00:06:07.242 Calling clear_vhost_blk_subsystem 00:06:07.242 Calling clear_vhost_scsi_subsystem 00:06:07.242 Calling clear_bdev_subsystem 00:06:07.242 15:19:06 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:07.242 15:19:06 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:07.242 15:19:06 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:07.242 15:19:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.242 15:19:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:07.242 15:19:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:07.810 15:19:06 json_config -- json_config/json_config.sh@352 -- # break 00:06:07.810 15:19:06 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:07.810 15:19:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:07.810 15:19:06 json_config -- json_config/common.sh@31 -- # local app=target 00:06:07.810 15:19:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.810 15:19:06 json_config -- json_config/common.sh@35 -- # [[ -n 59340 ]] 00:06:07.810 15:19:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59340 00:06:07.810 15:19:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.810 15:19:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.810 15:19:06 json_config -- json_config/common.sh@41 -- # kill -0 59340 00:06:07.810 15:19:06 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.377 15:19:07 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.377 15:19:07 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.377 15:19:07 json_config -- json_config/common.sh@41 -- # kill -0 59340 00:06:08.377 15:19:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.377 15:19:07 json_config -- json_config/common.sh@43 -- # break 00:06:08.377 15:19:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.377 SPDK target shutdown done 00:06:08.377 15:19:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.377 INFO: relaunching applications... 00:06:08.377 15:19:07 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:08.377 15:19:07 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:08.377 15:19:07 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.377 15:19:07 json_config -- json_config/common.sh@10 -- # shift 00:06:08.377 15:19:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.377 15:19:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.377 15:19:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.377 15:19:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.377 15:19:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.377 15:19:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59636 00:06:08.377 15:19:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.377 Waiting for target to run... 00:06:08.377 15:19:07 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:08.377 15:19:07 json_config -- json_config/common.sh@25 -- # waitforlisten 59636 /var/tmp/spdk_tgt.sock 00:06:08.377 15:19:07 json_config -- common/autotest_common.sh@831 -- # '[' -z 59636 ']' 00:06:08.377 15:19:07 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.377 15:19:07 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.377 15:19:07 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.377 15:19:07 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.377 15:19:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.377 [2024-10-01 15:19:07.440161] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
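Note: the relaunch above closes the save/restore loop of json_config.sh. The first target's configuration was captured with save_config, the target was stopped with SIGINT, and a fresh spdk_tgt (pid 59636) now boots straight from the saved JSON. A condensed sketch of that round trip, using the rpc.py and spdk_tgt paths from this run:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    kill -SIGINT "${app_pid[target]}"    # graceful shutdown, as json_config_test_shutdown_app does
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json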
00:06:08.377 [2024-10-01 15:19:07.440528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59636 ] 00:06:08.653 [2024-10-01 15:19:07.742569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.653 [2024-10-01 15:19:07.797541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.219 [2024-10-01 15:19:08.123669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.219 [2024-10-01 15:19:08.155755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.478 00:06:09.478 INFO: Checking if target configuration is the same... 00:06:09.478 15:19:08 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.478 15:19:08 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:09.478 15:19:08 json_config -- json_config/common.sh@26 -- # echo '' 00:06:09.478 15:19:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:09.478 15:19:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:09.478 15:19:08 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:09.478 15:19:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:09.478 15:19:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.478 + '[' 2 -ne 2 ']' 00:06:09.478 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:09.478 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:09.478 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:09.478 +++ basename /dev/fd/62 00:06:09.478 ++ mktemp /tmp/62.XXX 00:06:09.478 + tmp_file_1=/tmp/62.8Ya 00:06:09.478 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:09.478 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:09.478 + tmp_file_2=/tmp/spdk_tgt_config.json.SVb 00:06:09.478 + ret=0 00:06:09.478 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.049 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.049 + diff -u /tmp/62.8Ya /tmp/spdk_tgt_config.json.SVb 00:06:10.049 INFO: JSON config files are the same 00:06:10.049 + echo 'INFO: JSON config files are the same' 00:06:10.049 + rm /tmp/62.8Ya /tmp/spdk_tgt_config.json.SVb 00:06:10.049 + exit 0 00:06:10.049 INFO: changing configuration and checking if this can be detected... 00:06:10.049 15:19:09 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:10.049 15:19:09 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
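Note: the "JSON config files are the same" verdict comes from normalizing both sides before diffing. json_diff.sh passes the live configuration (save_config output) and the on-disk spdk_tgt_config.json through config_filter.py -method sort, so key ordering cannot produce false mismatches. A rough equivalent, with illustrative file names:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | ./test/json_config/config_filter.py -method sort > live.json
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > disk.json
    diff -u live.json disk.json && echo 'INFO: JSON config files are the same'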
00:06:10.049 15:19:09 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.049 15:19:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.308 15:19:09 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.308 15:19:09 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:10.308 15:19:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.308 + '[' 2 -ne 2 ']' 00:06:10.308 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:10.308 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:10.308 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:10.308 +++ basename /dev/fd/62 00:06:10.308 ++ mktemp /tmp/62.XXX 00:06:10.308 + tmp_file_1=/tmp/62.Y7q 00:06:10.308 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.308 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.308 + tmp_file_2=/tmp/spdk_tgt_config.json.TTH 00:06:10.308 + ret=0 00:06:10.308 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.875 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.875 + diff -u /tmp/62.Y7q /tmp/spdk_tgt_config.json.TTH 00:06:10.875 + ret=1 00:06:10.875 + echo '=== Start of file: /tmp/62.Y7q ===' 00:06:10.875 + cat /tmp/62.Y7q 00:06:10.875 + echo '=== End of file: /tmp/62.Y7q ===' 00:06:10.875 + echo '' 00:06:10.875 + echo '=== Start of file: /tmp/spdk_tgt_config.json.TTH ===' 00:06:10.875 + cat /tmp/spdk_tgt_config.json.TTH 00:06:10.875 + echo '=== End of file: /tmp/spdk_tgt_config.json.TTH ===' 00:06:10.875 + echo '' 00:06:10.875 + rm /tmp/62.Y7q /tmp/spdk_tgt_config.json.TTH 00:06:10.875 + exit 1 00:06:10.875 INFO: configuration change detected. 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
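The negative half of the check, traced next, mutates the running target and requires the same comparison to fail this time. A sketch of that flow, with the normalization elided for brevity (the real test reuses the sort-based json_diff.sh, and MallocBdevForConfigChangeCheck is the marker bdev it created earlier for exactly this purpose):

    # Delete the marker bdev, then insist the saved config no longer matches.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | diff -q spdk_tgt_config.json - >/dev/null; then
        echo 'ERROR: configuration change was not detected' >&2
        exit 1
    fi
    echo 'INFO: configuration change detected.'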
00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 59636 ]] 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.875 15:19:09 json_config -- json_config/json_config.sh@330 -- # killprocess 59636 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@950 -- # '[' -z 59636 ']' 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@954 -- # kill -0 59636 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@955 -- # uname 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59636 00:06:10.875 killing process with pid 59636 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59636' 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@969 -- # kill 59636 00:06:10.875 15:19:09 json_config -- common/autotest_common.sh@974 -- # wait 59636 00:06:11.134 15:19:10 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.134 15:19:10 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:11.134 15:19:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.134 15:19:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.134 15:19:10 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:11.134 INFO: Success 00:06:11.134 15:19:10 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:11.134 00:06:11.134 real 0m9.807s 00:06:11.134 user 0m14.927s 00:06:11.134 sys 0m1.577s 00:06:11.134 
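A sketch of the killprocess helper traced above: it checks that the pid still maps to a live process (the comm lookup is what prints reactor_0 in the log) before signalling, so a recycled pid belonging to something else is never killed, then waits for the exit. The sudo special case is noted but not implemented here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        local name
        name=$(ps --no-headers -o comm= "$pid") || return 0   # pid already gone
        echo "killing process with pid $pid"
        # The real helper special-cases name == sudo (kills the child instead);
        # this sketch assumes spdk_tgt was started directly (comm = reactor_0).
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }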
15:19:10 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.134 15:19:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.134 ************************************ 00:06:11.134 END TEST json_config 00:06:11.134 ************************************ 00:06:11.134 15:19:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:11.134 15:19:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.134 15:19:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.134 15:19:10 -- common/autotest_common.sh@10 -- # set +x 00:06:11.134 ************************************ 00:06:11.134 START TEST json_config_extra_key 00:06:11.134 ************************************ 00:06:11.134 15:19:10 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:11.393 15:19:10 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.393 15:19:10 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.393 15:19:10 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.394 --rc genhtml_branch_coverage=1 00:06:11.394 --rc genhtml_function_coverage=1 00:06:11.394 --rc genhtml_legend=1 00:06:11.394 --rc geninfo_all_blocks=1 00:06:11.394 --rc geninfo_unexecuted_blocks=1 00:06:11.394 00:06:11.394 ' 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.394 --rc genhtml_branch_coverage=1 00:06:11.394 --rc genhtml_function_coverage=1 00:06:11.394 --rc genhtml_legend=1 00:06:11.394 --rc geninfo_all_blocks=1 00:06:11.394 --rc geninfo_unexecuted_blocks=1 00:06:11.394 00:06:11.394 ' 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.394 --rc genhtml_branch_coverage=1 00:06:11.394 --rc genhtml_function_coverage=1 00:06:11.394 --rc genhtml_legend=1 00:06:11.394 --rc geninfo_all_blocks=1 00:06:11.394 --rc geninfo_unexecuted_blocks=1 00:06:11.394 00:06:11.394 ' 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.394 --rc genhtml_branch_coverage=1 00:06:11.394 --rc genhtml_function_coverage=1 00:06:11.394 --rc genhtml_legend=1 00:06:11.394 --rc geninfo_all_blocks=1 00:06:11.394 --rc geninfo_unexecuted_blocks=1 00:06:11.394 00:06:11.394 ' 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.394 15:19:10 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.394 15:19:10 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.394 15:19:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.394 15:19:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.394 15:19:10 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.394 15:19:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:11.394 15:19:10 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.394 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.394 15:19:10 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.394 INFO: launching applications... 00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
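The "[: : integer expression expected" message above is bash complaining about a numeric test on an empty variable: [ '' -eq 1 ] is an error, and the script only survives because the test merely falls through to false. Two sketches of safer spellings; SPDK_TEST_NVMF_NICS is a stand-in variable name, not necessarily the one tested at nvmf/common.sh line 33:

    # Default an empty/unset value to 0 before the numeric test.
    if [ "${SPDK_TEST_NVMF_NICS:-0}" -eq 1 ]; then
        echo 'NIC tests requested'
    fi
    # Or guard explicitly against the empty case first.
    if [[ -n ${SPDK_TEST_NVMF_NICS:-} && ${SPDK_TEST_NVMF_NICS} -eq 1 ]]; then
        echo 'NIC tests requested'
    fi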
00:06:11.394 15:19:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59820 00:06:11.394 Waiting for target to run... 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59820 /var/tmp/spdk_tgt.sock 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59820 ']' 00:06:11.394 15:19:10 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.394 15:19:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.394 [2024-10-01 15:19:10.503052] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:06:11.394 [2024-10-01 15:19:10.503147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59820 ] 00:06:11.653 [2024-10-01 15:19:10.783670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.911 [2024-10-01 15:19:10.830841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.502 15:19:11 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.502 15:19:11 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:12.502 00:06:12.502 15:19:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:12.502 INFO: shutting down applications... 00:06:12.502 15:19:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
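A sketch of the shutdown sequence that follows: send SIGINT so spdk_tgt can exit cleanly, then poll with kill -0 (signal 0 is an existence check, visible as "kill -0 59820" in the trace) for up to roughly 15 seconds, 30 tries at 0.5s each:

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null || return 0
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        echo "ERROR: pid $pid did not exit" >&2
        return 1
    }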
00:06:12.502 15:19:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:12.502 15:19:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:12.502 15:19:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.502 15:19:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59820 ]] 00:06:12.502 15:19:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59820 00:06:12.502 15:19:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.502 15:19:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.502 15:19:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59820 00:06:12.502 15:19:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.069 15:19:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.069 15:19:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.069 15:19:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59820 00:06:13.069 15:19:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:13.069 15:19:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:13.069 SPDK target shutdown done 00:06:13.069 15:19:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:13.069 15:19:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:13.069 Success 00:06:13.069 15:19:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:13.069 00:06:13.069 real 0m1.789s 00:06:13.069 user 0m1.728s 00:06:13.069 sys 0m0.324s 00:06:13.069 15:19:12 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.069 15:19:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.069 ************************************ 00:06:13.069 END TEST json_config_extra_key 00:06:13.069 ************************************ 00:06:13.069 15:19:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.069 15:19:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.069 15:19:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.069 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:06:13.069 ************************************ 00:06:13.069 START TEST alias_rpc 00:06:13.069 ************************************ 00:06:13.069 15:19:12 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.069 * Looking for test storage... 
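A minimal stand-in for the run_test wrapper visible around every suite here: it prints the START/END banners and forwards to the test script, letting bash's time keyword produce the real/user/sys lines seen above. The exact banner layout and the xtrace management of the real helper are approximated:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        time "$@"                 # bash time keyword prints real/user/sys on completion
        echo "END TEST $name"
        echo '************************************'
    }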
00:06:13.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:13.069 15:19:12 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.069 15:19:12 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:13.069 15:19:12 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.328 15:19:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:13.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.328 --rc genhtml_branch_coverage=1 00:06:13.328 --rc genhtml_function_coverage=1 00:06:13.328 --rc genhtml_legend=1 00:06:13.328 --rc geninfo_all_blocks=1 00:06:13.328 --rc geninfo_unexecuted_blocks=1 00:06:13.328 00:06:13.328 ' 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:13.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.328 --rc genhtml_branch_coverage=1 00:06:13.328 --rc genhtml_function_coverage=1 00:06:13.328 --rc genhtml_legend=1 00:06:13.328 --rc geninfo_all_blocks=1 00:06:13.328 --rc geninfo_unexecuted_blocks=1 00:06:13.328 00:06:13.328 ' 00:06:13.328 15:19:12 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:13.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.328 --rc genhtml_branch_coverage=1 00:06:13.328 --rc genhtml_function_coverage=1 00:06:13.328 --rc genhtml_legend=1 00:06:13.328 --rc geninfo_all_blocks=1 00:06:13.328 --rc geninfo_unexecuted_blocks=1 00:06:13.328 00:06:13.328 ' 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:13.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.328 --rc genhtml_branch_coverage=1 00:06:13.328 --rc genhtml_function_coverage=1 00:06:13.328 --rc genhtml_legend=1 00:06:13.328 --rc geninfo_all_blocks=1 00:06:13.328 --rc geninfo_unexecuted_blocks=1 00:06:13.328 00:06:13.328 ' 00:06:13.328 15:19:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.328 15:19:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59903 00:06:13.328 15:19:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.328 15:19:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59903 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59903 ']' 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.328 15:19:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.328 [2024-10-01 15:19:12.332753] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
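The cmp_versions trace repeated above (here deciding whether the installed lcov predates 2.0, which selects the older --rc lcov_* option spellings) implements a dotted-version compare: split both strings on . - :, then compare field by field numerically, treating missing fields as 0. A sketch, assuming purely numeric fields:

    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov older than 2.0: using lcov_* coverage flag names'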
00:06:13.328 [2024-10-01 15:19:12.332894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59903 ] 00:06:13.328 [2024-10-01 15:19:12.471122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.586 [2024-10-01 15:19:12.546231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.519 15:19:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:14.519 15:19:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59903 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59903 ']' 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59903 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59903 00:06:14.519 killing process with pid 59903 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59903' 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@969 -- # kill 59903 00:06:14.519 15:19:13 alias_rpc -- common/autotest_common.sh@974 -- # wait 59903 00:06:15.087 00:06:15.087 real 0m1.853s 00:06:15.087 user 0m2.267s 00:06:15.087 sys 0m0.343s 00:06:15.087 15:19:13 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.087 15:19:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.087 ************************************ 00:06:15.087 END TEST alias_rpc 00:06:15.087 ************************************ 00:06:15.087 15:19:13 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:06:15.087 15:19:13 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.087 15:19:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.087 15:19:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.087 15:19:13 -- common/autotest_common.sh@10 -- # set +x 00:06:15.087 ************************************ 00:06:15.087 START TEST dpdk_mem_utility 00:06:15.087 ************************************ 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.087 * Looking for test storage... 
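A sketch of the cleanup pattern alias_rpc arms before doing anything else (the trap is traced above at alias_rpc.sh@10): any failing command fires the ERR trap and tears the target down, so a broken run cannot leak an spdk_tgt process into later tests. Paths are abbreviated relative to the repo root:

    spdk_tgt_pid=
    trap 'killprocess "$spdk_tgt_pid"; exit 1' ERR
    build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # ... test body: any command failing here triggers the trap ...
    killprocess "$spdk_tgt_pid"   # the success path tears down explicitly too
    trap - ERR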
00:06:15.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.087 15:19:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.087 --rc genhtml_branch_coverage=1 00:06:15.087 --rc genhtml_function_coverage=1 00:06:15.087 --rc genhtml_legend=1 00:06:15.087 --rc geninfo_all_blocks=1 00:06:15.087 --rc geninfo_unexecuted_blocks=1 00:06:15.087 00:06:15.087 ' 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.087 --rc 
genhtml_branch_coverage=1 00:06:15.087 --rc genhtml_function_coverage=1 00:06:15.087 --rc genhtml_legend=1 00:06:15.087 --rc geninfo_all_blocks=1 00:06:15.087 --rc geninfo_unexecuted_blocks=1 00:06:15.087 00:06:15.087 ' 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.087 --rc genhtml_branch_coverage=1 00:06:15.087 --rc genhtml_function_coverage=1 00:06:15.087 --rc genhtml_legend=1 00:06:15.087 --rc geninfo_all_blocks=1 00:06:15.087 --rc geninfo_unexecuted_blocks=1 00:06:15.087 00:06:15.087 ' 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.087 --rc genhtml_branch_coverage=1 00:06:15.087 --rc genhtml_function_coverage=1 00:06:15.087 --rc genhtml_legend=1 00:06:15.087 --rc geninfo_all_blocks=1 00:06:15.087 --rc geninfo_unexecuted_blocks=1 00:06:15.087 00:06:15.087 ' 00:06:15.087 15:19:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:15.087 15:19:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60000 00:06:15.087 15:19:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60000 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 60000 ']' 00:06:15.087 15:19:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.087 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.346 [2024-10-01 15:19:14.262713] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
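A sketch of the memory-inspection flow that follows: the test asks the running target to dump its DPDK memory state to a file over RPC, then post-processes that dump. dpdk_mem_info.py summarizes all heaps, mempools, and memzones by default; -m 0 narrows the report to heap id 0, which is what produces the long element lists below:

    scripts/rpc.py env_dpdk_get_mem_stats      # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    scripts/dpdk_mem_info.py                   # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0              # per-element detail for heap 0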
00:06:15.346 [2024-10-01 15:19:14.263454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60000 ] 00:06:15.346 [2024-10-01 15:19:14.404557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.346 [2024-10-01 15:19:14.466235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.605 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.605 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:15.605 15:19:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:15.605 15:19:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:15.605 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.605 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.605 { 00:06:15.605 "filename": "/tmp/spdk_mem_dump.txt" 00:06:15.605 } 00:06:15.605 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.605 15:19:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:15.605 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:15.605 1 heaps totaling size 860.000000 MiB 00:06:15.605 size: 860.000000 MiB heap id: 0 00:06:15.605 end heaps---------- 00:06:15.605 9 mempools totaling size 642.649841 MiB 00:06:15.605 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:15.605 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:15.605 size: 92.545471 MiB name: bdev_io_60000 00:06:15.605 size: 51.011292 MiB name: evtpool_60000 00:06:15.605 size: 50.003479 MiB name: msgpool_60000 00:06:15.605 size: 36.509338 MiB name: fsdev_io_60000 00:06:15.605 size: 21.763794 MiB name: PDU_Pool 00:06:15.605 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:15.605 size: 0.026123 MiB name: Session_Pool 00:06:15.605 end mempools------- 00:06:15.605 6 memzones totaling size 4.142822 MiB 00:06:15.605 size: 1.000366 MiB name: RG_ring_0_60000 00:06:15.605 size: 1.000366 MiB name: RG_ring_1_60000 00:06:15.605 size: 1.000366 MiB name: RG_ring_4_60000 00:06:15.605 size: 1.000366 MiB name: RG_ring_5_60000 00:06:15.605 size: 0.125366 MiB name: RG_ring_2_60000 00:06:15.605 size: 0.015991 MiB name: RG_ring_3_60000 00:06:15.605 end memzones------- 00:06:15.605 15:19:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:15.865 heap id: 0 total size: 860.000000 MiB number of busy elements: 276 number of free elements: 16 00:06:15.865 list of free elements. 
size: 13.942200 MiB 00:06:15.865 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:15.865 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:15.865 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:15.865 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:15.865 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:15.865 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:15.865 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:15.865 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:15.865 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:15.865 element at address: 0x20001d800000 with size: 0.572815 MiB 00:06:15.865 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:15.865 element at address: 0x200003e00000 with size: 0.487732 MiB 00:06:15.865 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:15.865 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:15.866 element at address: 0x20002ac00000 with size: 0.398682 MiB 00:06:15.866 element at address: 0x200003a00000 with size: 0.351562 MiB 00:06:15.866 list of standard malloc elements. size: 199.261108 MiB 00:06:15.866 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:15.866 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:15.866 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:15.866 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:15.866 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:15.866 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:15.866 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:15.866 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:15.866 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:15.866 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6c00 with size: 0.000183 MiB 
00:06:15.866 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a5a000 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a5e4c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7e780 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7e840 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7e900 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:15.866 element at 
address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:15.866 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:15.866 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:15.866 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:15.866 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:15.866 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:15.866 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:15.866 element at address: 0x20000707b3c0 
with size: 0.000183 MiB 00:06:15.866 element at address: 0x20000707b480 with size: 0.000183 MiB
00:06:15.866 [… roughly 140 further heap-dump lines for pid 60000, each "element at address: 0x… with size: 0.000183 MiB" (minimum-size free elements across the 0x200007xx through 0x20002acx regions) …]
00:06:15.867 list of memzone associated elements.
size: 646.796692 MiB 00:06:15.867 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:15.867 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:15.867 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:15.867 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:15.867 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:15.867 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_60000_0 00:06:15.867 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:15.867 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60000_0 00:06:15.867 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:15.867 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60000_0 00:06:15.867 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:15.867 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60000_0 00:06:15.867 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:15.867 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:15.867 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:15.867 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:15.867 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:15.867 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60000 00:06:15.867 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:15.867 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60000 00:06:15.867 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:15.867 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60000 00:06:15.867 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:15.867 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:15.867 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:15.867 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:15.867 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:15.868 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:15.868 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:15.868 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:15.868 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:15.868 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60000 00:06:15.868 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:15.868 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60000 00:06:15.868 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:15.868 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60000 00:06:15.868 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:15.868 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60000 00:06:15.868 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:15.868 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60000 00:06:15.868 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:15.868 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60000 00:06:15.868 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:15.868 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:15.868 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:15.868 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:15.868 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:15.868 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:15.868 element at address: 0x200003a5e580 with size: 0.125488 MiB 00:06:15.868 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60000 00:06:15.868 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:15.868 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:15.868 element at address: 0x20002ac66280 with size: 0.023743 MiB 00:06:15.868 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:15.868 element at address: 0x200003a5a2c0 with size: 0.016113 MiB 00:06:15.868 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60000 00:06:15.868 element at address: 0x20002ac6c3c0 with size: 0.002441 MiB 00:06:15.868 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:15.868 element at address: 0x2000002d6fc0 with size: 0.000305 MiB 00:06:15.868 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60000 00:06:15.868 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:15.868 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60000 00:06:15.868 element at address: 0x200003a5a0c0 with size: 0.000305 MiB 00:06:15.868 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60000 00:06:15.868 element at address: 0x20002ac6ce80 with size: 0.000305 MiB 00:06:15.868 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:15.868 15:19:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:15.868 15:19:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60000 00:06:15.868 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 60000 ']' 00:06:15.868 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 60000 00:06:15.868 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:15.868 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.868 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60000 00:06:15.868 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.868 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.868 killing process with pid 60000 00:06:15.868 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60000' 00:06:15.868 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 60000 00:06:15.868 15:19:14 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 60000 00:06:16.126 ************************************ 00:06:16.126 END TEST dpdk_mem_utility 00:06:16.126 ************************************ 00:06:16.126 00:06:16.126 real 0m1.100s 00:06:16.126 user 0m1.167s 00:06:16.126 sys 0m0.329s 00:06:16.126 15:19:15 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.126 15:19:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.126 15:19:15 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:16.126 15:19:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.126 15:19:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.126 15:19:15 -- common/autotest_common.sh@10 -- # set +x 
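The killprocess calls traced above follow a fixed pattern from autotest_common.sh: confirm the pid argument is non-empty, confirm the process is still alive, refuse to signal a sudo wrapper, then SIGTERM and reap. A condensed, illustrative sketch of that pattern (not the verbatim SPDK source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1               # mirrors the '[' -z 60000 ']' guard
        kill -0 "$pid" 2>/dev/null || return 1  # still alive?
        local name
        [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        [ "$name" = sudo ] && return 1          # never terminate a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                             # reap it so the exit status propagates
    }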
00:06:16.126 ************************************ 00:06:16.126 START TEST event 00:06:16.126 ************************************ 00:06:16.126 15:19:15 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:16.126 * Looking for test storage... 00:06:16.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:16.126 15:19:15 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:16.126 15:19:15 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:16.126 15:19:15 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:16.385 15:19:15 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:16.385 15:19:15 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.385 15:19:15 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.385 15:19:15 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.385 15:19:15 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.385 15:19:15 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.385 15:19:15 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.385 15:19:15 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.385 15:19:15 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.385 15:19:15 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.385 15:19:15 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.385 15:19:15 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.385 15:19:15 event -- scripts/common.sh@344 -- # case "$op" in 00:06:16.385 15:19:15 event -- scripts/common.sh@345 -- # : 1 00:06:16.385 15:19:15 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.385 15:19:15 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.385 15:19:15 event -- scripts/common.sh@365 -- # decimal 1 00:06:16.385 15:19:15 event -- scripts/common.sh@353 -- # local d=1 00:06:16.385 15:19:15 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.385 15:19:15 event -- scripts/common.sh@355 -- # echo 1 00:06:16.385 15:19:15 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.385 15:19:15 event -- scripts/common.sh@366 -- # decimal 2 00:06:16.385 15:19:15 event -- scripts/common.sh@353 -- # local d=2 00:06:16.385 15:19:15 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.385 15:19:15 event -- scripts/common.sh@355 -- # echo 2 00:06:16.385 15:19:15 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.385 15:19:15 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.385 15:19:15 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.385 15:19:15 event -- scripts/common.sh@368 -- # return 0 00:06:16.385 15:19:15 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.385 15:19:15 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:16.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.385 --rc genhtml_branch_coverage=1 00:06:16.385 --rc genhtml_function_coverage=1 00:06:16.385 --rc genhtml_legend=1 00:06:16.385 --rc geninfo_all_blocks=1 00:06:16.385 --rc geninfo_unexecuted_blocks=1 00:06:16.385 00:06:16.385 ' 00:06:16.385 15:19:15 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:16.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.385 --rc genhtml_branch_coverage=1 00:06:16.385 --rc genhtml_function_coverage=1 00:06:16.385 --rc genhtml_legend=1 00:06:16.385 --rc 
geninfo_all_blocks=1 00:06:16.385 --rc geninfo_unexecuted_blocks=1 00:06:16.385 00:06:16.385 ' 00:06:16.385 15:19:15 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:16.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.385 --rc genhtml_branch_coverage=1 00:06:16.385 --rc genhtml_function_coverage=1 00:06:16.385 --rc genhtml_legend=1 00:06:16.385 --rc geninfo_all_blocks=1 00:06:16.385 --rc geninfo_unexecuted_blocks=1 00:06:16.385 00:06:16.385 ' 00:06:16.385 15:19:15 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:16.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.385 --rc genhtml_branch_coverage=1 00:06:16.385 --rc genhtml_function_coverage=1 00:06:16.385 --rc genhtml_legend=1 00:06:16.385 --rc geninfo_all_blocks=1 00:06:16.385 --rc geninfo_unexecuted_blocks=1 00:06:16.385 00:06:16.385 ' 00:06:16.385 15:19:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:16.385 15:19:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:16.385 15:19:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.385 15:19:15 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:16.385 15:19:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.385 15:19:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.385 ************************************ 00:06:16.385 START TEST event_perf 00:06:16.385 ************************************ 00:06:16.385 15:19:15 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.385 Running I/O for 1 seconds...[2024-10-01 15:19:15.345674] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:06:16.385 [2024-10-01 15:19:15.345772] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60084 ] 00:06:16.385 [2024-10-01 15:19:15.485764] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.644 [2024-10-01 15:19:15.558723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.644 Running I/O for 1 seconds...[2024-10-01 15:19:15.558806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.644 [2024-10-01 15:19:15.558872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.644 [2024-10-01 15:19:15.558873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.576 00:06:17.576 lcore 0: 181079 00:06:17.576 lcore 1: 181078 00:06:17.576 lcore 2: 181077 00:06:17.576 lcore 3: 181078 00:06:17.576 done. 
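The lt 1.15 2 trace earlier in this test is scripts/common.sh deciding whether the installed lcov is older than 2.x (it is, so the 1.x-style --rc options are exported). Distilled into a short illustrative form, the comparison splits each version string on the characters .-: and compares numerically field by field:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' ]]    # every field equal
    }
    lt 1.15 2 && echo "lcov < 2: use the 1.x option set"   # prints the message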
00:06:17.576 00:06:17.576 real 0m1.299s 00:06:17.576 user 0m4.118s 00:06:17.576 sys 0m0.053s 00:06:17.576 15:19:16 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.576 15:19:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 ************************************ 00:06:17.576 END TEST event_perf 00:06:17.576 ************************************ 00:06:17.576 15:19:16 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:17.576 15:19:16 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:17.576 15:19:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.576 15:19:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 ************************************ 00:06:17.576 START TEST event_reactor 00:06:17.576 ************************************ 00:06:17.576 15:19:16 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:17.576 [2024-10-01 15:19:16.689234] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:06:17.577 [2024-10-01 15:19:16.689319] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60128 ] 00:06:17.834 [2024-10-01 15:19:16.823556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.834 [2024-10-01 15:19:16.897796] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.804 test_start 00:06:18.804 oneshot 00:06:18.804 tick 100 00:06:18.804 tick 100 00:06:18.804 tick 250 00:06:18.804 tick 100 00:06:18.804 tick 100 00:06:18.804 tick 250 00:06:18.804 tick 100 00:06:18.804 tick 500 00:06:18.804 tick 100 00:06:18.804 tick 100 00:06:18.804 tick 250 00:06:18.804 tick 100 00:06:18.804 tick 100 00:06:18.804 test_end 00:06:18.804 00:06:18.804 real 0m1.294s 00:06:18.804 user 0m1.149s 00:06:18.804 sys 0m0.038s 00:06:18.804 15:19:17 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.804 15:19:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:18.804 ************************************ 00:06:18.804 END TEST event_reactor 00:06:18.804 ************************************ 00:06:19.062 15:19:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.062 15:19:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:19.062 15:19:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.062 15:19:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.063 ************************************ 00:06:19.063 START TEST event_reactor_perf 00:06:19.063 ************************************ 00:06:19.063 15:19:18 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.063 [2024-10-01 15:19:18.030783] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:06:19.063 [2024-10-01 15:19:18.030905] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60158 ] 00:06:19.063 [2024-10-01 15:19:18.175831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.321 [2024-10-01 15:19:18.246868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.254 test_start 00:06:20.254 test_end 00:06:20.254 Performance: 336814 events per second 00:06:20.254 00:06:20.254 real 0m1.307s 00:06:20.254 user 0m1.151s 00:06:20.254 sys 0m0.047s 00:06:20.254 15:19:19 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.254 15:19:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.254 ************************************ 00:06:20.254 END TEST event_reactor_perf 00:06:20.254 ************************************ 00:06:20.254 15:19:19 event -- event/event.sh@49 -- # uname -s 00:06:20.254 15:19:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:20.254 15:19:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:20.254 15:19:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.254 15:19:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.254 15:19:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.254 ************************************ 00:06:20.254 START TEST event_scheduler 00:06:20.254 ************************************ 00:06:20.254 15:19:19 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:20.513 * Looking for test storage... 
00:06:20.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:20.513 15:19:19 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.513 15:19:19 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.513 15:19:19 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.513 15:19:19 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.513 15:19:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:20.513 15:19:19 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.513 15:19:19 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.513 --rc genhtml_branch_coverage=1 00:06:20.513 --rc genhtml_function_coverage=1 00:06:20.513 --rc genhtml_legend=1 00:06:20.513 --rc geninfo_all_blocks=1 00:06:20.513 --rc geninfo_unexecuted_blocks=1 00:06:20.513 00:06:20.513 ' 00:06:20.513 15:19:19 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.514 --rc genhtml_branch_coverage=1 00:06:20.514 --rc genhtml_function_coverage=1 00:06:20.514 --rc genhtml_legend=1 00:06:20.514 --rc geninfo_all_blocks=1 00:06:20.514 --rc geninfo_unexecuted_blocks=1 00:06:20.514 00:06:20.514 ' 00:06:20.514 15:19:19 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.514 --rc genhtml_branch_coverage=1 00:06:20.514 --rc genhtml_function_coverage=1 00:06:20.514 --rc genhtml_legend=1 00:06:20.514 --rc geninfo_all_blocks=1 00:06:20.514 --rc geninfo_unexecuted_blocks=1 00:06:20.514 00:06:20.514 ' 00:06:20.514 15:19:19 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.514 --rc genhtml_branch_coverage=1 00:06:20.514 --rc genhtml_function_coverage=1 00:06:20.514 --rc genhtml_legend=1 00:06:20.514 --rc geninfo_all_blocks=1 00:06:20.514 --rc geninfo_unexecuted_blocks=1 00:06:20.514 00:06:20.514 ' 00:06:20.514 15:19:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:20.514 15:19:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60232 00:06:20.514 15:19:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.514 15:19:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60232 00:06:20.514 15:19:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:20.514 15:19:19 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60232 ']' 00:06:20.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.514 15:19:19 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.514 15:19:19 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.514 15:19:19 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.514 15:19:19 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.514 15:19:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.514 [2024-10-01 15:19:19.619222] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:06:20.514 [2024-10-01 15:19:19.619334] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60232 ] 00:06:20.773 [2024-10-01 15:19:19.755144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.773 [2024-10-01 15:19:19.833579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.773 [2024-10-01 15:19:19.833692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.773 [2024-10-01 15:19:19.834582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.773 [2024-10-01 15:19:19.834605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.773 15:19:19 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.773 15:19:19 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:20.773 15:19:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:20.773 15:19:19 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.773 15:19:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:20.773 POWER: Cannot set governor of lcore 0 to userspace 00:06:20.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:20.773 POWER: Cannot set governor of lcore 0 to performance 00:06:20.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:20.773 POWER: Cannot set governor of lcore 0 to userspace 00:06:20.773 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:20.773 POWER: Cannot set governor of lcore 0 to userspace 00:06:20.773 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:20.773 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:20.773 POWER: Unable to set Power Management Environment for lcore 0 00:06:20.773 [2024-10-01 15:19:19.907733] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:20.773 [2024-10-01 15:19:19.907758] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:20.773 [2024-10-01 15:19:19.907773] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:20.773 [2024-10-01 15:19:19.907787] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:20.773 [2024-10-01 15:19:19.907797] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:20.773 [2024-10-01 15:19:19.907806] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:20.773 15:19:19 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.773 15:19:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:20.773 15:19:19 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.773 15:19:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.058 [2024-10-01 15:19:19.972135] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:21.058 15:19:19 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.058 15:19:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.058 15:19:19 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.058 15:19:19 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.058 15:19:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.058 ************************************ 00:06:21.058 START TEST scheduler_create_thread 00:06:21.058 ************************************ 00:06:21.058 15:19:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:21.058 15:19:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.058 15:19:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.058 15:19:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.058 2 00:06:21.058 15:19:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.058 15:19:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.058 15:19:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.058 15:19:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.059 3 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.059 4 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.059 5 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.059 6 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.059 7 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.059 8 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.059 9 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.059 10 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.059 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.625 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.625 15:19:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:21.625 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.625 15:19:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.001 15:19:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.001 15:19:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:23.001 15:19:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:23.001 15:19:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.001 15:19:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.935 15:19:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.935 00:06:23.935 real 0m3.093s 00:06:23.935 user 0m0.019s 00:06:23.935 sys 0m0.006s 00:06:23.935 ************************************ 00:06:23.935 END TEST scheduler_create_thread 00:06:23.935 ************************************ 00:06:23.935 15:19:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.935 15:19:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.193 15:19:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:24.193 15:19:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60232 00:06:24.193 15:19:23 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60232 ']' 00:06:24.193 15:19:23 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60232 00:06:24.193 15:19:23 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:24.193 15:19:23 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.193 15:19:23 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60232 00:06:24.193 15:19:23 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:24.193 killing process with pid 60232 00:06:24.193 15:19:23 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:24.193 15:19:23 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60232' 00:06:24.193 15:19:23 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60232 00:06:24.193 15:19:23 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 60232 00:06:24.451 [2024-10-01 15:19:23.456419] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:24.710 00:06:24.710 real 0m4.282s 00:06:24.710 user 0m6.767s 00:06:24.710 sys 0m0.293s 00:06:24.710 15:19:23 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.710 15:19:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.710 ************************************ 00:06:24.710 END TEST event_scheduler 00:06:24.710 ************************************ 00:06:24.710 15:19:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:24.710 15:19:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:24.710 15:19:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.710 15:19:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.710 15:19:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.710 ************************************ 00:06:24.710 START TEST app_repeat 00:06:24.710 ************************************ 00:06:24.710 15:19:23 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60337 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.710 Process app_repeat pid: 60337 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60337' 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.710 spdk_app_start Round 0 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:24.710 15:19:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60337 /var/tmp/spdk-nbd.sock 00:06:24.710 15:19:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60337 ']' 00:06:24.710 15:19:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.710 15:19:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.710 15:19:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.710 15:19:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.711 15:19:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.711 [2024-10-01 15:19:23.742972] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
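app_repeat is now up (pid 60337) and the harness blocks in waitforlisten until the app answers RPC on /var/tmp/spdk-nbd.sock. The locals visible in the trace (rpc_addr, max_retries=100) suggest the shape below; this is an illustrative reconstruction, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" 2>/dev/null || return 1     # app died during startup
            if [ -S "$rpc_addr" ] && rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                               # socket exists and RPC answers
            fi
            sleep 0.1
        done
        return 1                                       # gave up after 100 tries
    }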
00:06:24.711 [2024-10-01 15:19:23.743057] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60337 ] 00:06:24.711 [2024-10-01 15:19:23.876303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.969 [2024-10-01 15:19:23.936886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.969 [2024-10-01 15:19:23.936896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.969 15:19:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.969 15:19:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:24.969 15:19:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.228 Malloc0 00:06:25.228 15:19:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.487 Malloc1 00:06:25.746 15:19:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.746 15:19:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.004 /dev/nbd0 00:06:26.004 15:19:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.004 15:19:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:26.004 15:19:25 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.004 1+0 records in 00:06:26.004 1+0 records out 00:06:26.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025459 s, 16.1 MB/s 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.004 15:19:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.004 15:19:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.004 15:19:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.004 15:19:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.262 /dev/nbd1 00:06:26.262 15:19:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.262 15:19:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.262 1+0 records in 00:06:26.262 1+0 records out 00:06:26.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395315 s, 10.4 MB/s 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.262 15:19:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.262 15:19:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.262 15:19:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.262 15:19:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.262 15:19:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.262 
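Each /dev/nbdN is accepted only after the waitfornbd probe traced above: wait for the name to appear in /proc/partitions, then pull a single 4 KiB block through the device with O_DIRECT and check that the copy is non-empty. A condensed, illustrative sketch:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # kernel registered it?
            sleep 0.1
        done
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]     # a 4096-byte copy proves the device serves reads
    }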
15:19:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.828 15:19:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.828 { 00:06:26.828 "bdev_name": "Malloc0", 00:06:26.828 "nbd_device": "/dev/nbd0" 00:06:26.828 }, 00:06:26.828 { 00:06:26.829 "bdev_name": "Malloc1", 00:06:26.829 "nbd_device": "/dev/nbd1" 00:06:26.829 } 00:06:26.829 ]' 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.829 { 00:06:26.829 "bdev_name": "Malloc0", 00:06:26.829 "nbd_device": "/dev/nbd0" 00:06:26.829 }, 00:06:26.829 { 00:06:26.829 "bdev_name": "Malloc1", 00:06:26.829 "nbd_device": "/dev/nbd1" 00:06:26.829 } 00:06:26.829 ]' 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.829 /dev/nbd1' 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.829 /dev/nbd1' 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.829 256+0 records in 00:06:26.829 256+0 records out 00:06:26.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0079781 s, 131 MB/s 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.829 256+0 records in 00:06:26.829 256+0 records out 00:06:26.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263654 s, 39.8 MB/s 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.829 256+0 records in 00:06:26.829 256+0 records out 00:06:26.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278639 s, 37.6 MB/s 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.829 15:19:25 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.829 15:19:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.088 15:19:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.088 15:19:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.088 15:19:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.088 15:19:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.088 15:19:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.088 15:19:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.088 15:19:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.088 15:19:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.088 15:19:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.088 15:19:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.655 15:19:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.655 15:19:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.655 15:19:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.655 15:19:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.655 15:19:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.655 15:19:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.655 15:19:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.655 15:19:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.655 15:19:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.655 15:19:26 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.655 15:19:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.914 15:19:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.914 15:19:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.914 15:19:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.914 15:19:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.914 15:19:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.914 15:19:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.914 15:19:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.914 15:19:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.914 15:19:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.914 15:19:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.914 15:19:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.914 15:19:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.914 15:19:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.480 15:19:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.480 [2024-10-01 15:19:27.495438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.480 [2024-10-01 15:19:27.554584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.480 [2024-10-01 15:19:27.554595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.480 [2024-10-01 15:19:27.584547] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.480 [2024-10-01 15:19:27.584603] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.805 spdk_app_start Round 1 00:06:31.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.805 15:19:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.805 15:19:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:31.805 15:19:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60337 /var/tmp/spdk-nbd.sock 00:06:31.805 15:19:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60337 ']' 00:06:31.805 15:19:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.805 15:19:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.805 15:19:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
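The nbd_dd_data_verify trace above follows a simple write-then-verify pattern: fill a scratch file with 1 MiB of random data, dd that file onto every exported NBD device with O_DIRECT, then byte-compare each device against the same file and remove it. A minimal standalone sketch of that pattern (scratch path and device list are illustrative, not taken from this run):

  # write/verify pattern from bdev/nbd_common.sh, reduced to its core steps
  tmp_file=/tmp/nbdrandtest            # illustrative path
  nbd_list=(/dev/nbd0 /dev/nbd1)

  # write phase: 256 x 4 KiB blocks of random data, copied to each device with O_DIRECT
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify phase: byte-compare the first 1 MiB of each device against the scratch file
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"

Because the same file is written to both devices, any cmp mismatch pinpoints the first differing byte on the device that corrupted or dropped the data.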
00:06:31.805 15:19:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.805 15:19:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.805 15:19:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.805 15:19:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:31.805 15:19:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.084 Malloc0 00:06:32.084 15:19:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.342 Malloc1 00:06:32.342 15:19:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.342 15:19:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.342 15:19:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.342 15:19:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.342 15:19:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.342 15:19:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.342 15:19:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.342 15:19:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.342 15:19:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.342 15:19:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.342 15:19:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.343 15:19:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.343 15:19:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.343 15:19:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.343 15:19:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.343 15:19:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.601 /dev/nbd0 00:06:32.601 15:19:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.601 15:19:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.601 1+0 records in 00:06:32.601 1+0 records out 
00:06:32.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321687 s, 12.7 MB/s 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.601 15:19:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:32.601 15:19:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.601 15:19:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.601 15:19:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.168 /dev/nbd1 00:06:33.168 15:19:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.168 15:19:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.168 1+0 records in 00:06:33.168 1+0 records out 00:06:33.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262128 s, 15.6 MB/s 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.168 15:19:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:33.168 15:19:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.168 15:19:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.168 15:19:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.168 15:19:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.168 15:19:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.426 15:19:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.426 { 00:06:33.426 "bdev_name": "Malloc0", 00:06:33.426 "nbd_device": "/dev/nbd0" 00:06:33.426 }, 00:06:33.426 { 00:06:33.426 "bdev_name": "Malloc1", 00:06:33.426 "nbd_device": "/dev/nbd1" 00:06:33.426 } 
00:06:33.426 ]' 00:06:33.426 15:19:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.426 { 00:06:33.426 "bdev_name": "Malloc0", 00:06:33.426 "nbd_device": "/dev/nbd0" 00:06:33.426 }, 00:06:33.426 { 00:06:33.426 "bdev_name": "Malloc1", 00:06:33.426 "nbd_device": "/dev/nbd1" 00:06:33.426 } 00:06:33.426 ]' 00:06:33.426 15:19:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.426 15:19:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.426 /dev/nbd1' 00:06:33.426 15:19:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.427 /dev/nbd1' 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.427 256+0 records in 00:06:33.427 256+0 records out 00:06:33.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00728241 s, 144 MB/s 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.427 256+0 records in 00:06:33.427 256+0 records out 00:06:33.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023041 s, 45.5 MB/s 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.427 256+0 records in 00:06:33.427 256+0 records out 00:06:33.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303109 s, 34.6 MB/s 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.427 15:19:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.993 15:19:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.993 15:19:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.993 15:19:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.993 15:19:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.993 15:19:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.993 15:19:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.993 15:19:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.993 15:19:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.993 15:19:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.993 15:19:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.251 15:19:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.511 15:19:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.511 15:19:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.770 15:19:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.030 [2024-10-01 15:19:33.951380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.030 [2024-10-01 15:19:34.011267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.030 [2024-10-01 15:19:34.011276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.030 [2024-10-01 15:19:34.041881] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.030 [2024-10-01 15:19:34.041929] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.314 spdk_app_start Round 2 00:06:38.314 15:19:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.314 15:19:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:38.314 15:19:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60337 /var/tmp/spdk-nbd.sock 00:06:38.314 15:19:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60337 ']' 00:06:38.314 15:19:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.314 15:19:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.314 15:19:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
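waitfornbd and waitfornbd_exit, traced repeatedly in this section, are bounded polling loops: up to 20 iterations over /proc/partitions, breaking as soon as the nbd device appears (or, for the exit variant, disappears). A hedged sketch of the appear-side helper, reduced to the registration check; the sleep interval is an assumption, since the xtrace only shows the loop bounds and the grep:

  # poll /proc/partitions until a given nbd device registers, up to 20 tries
  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              return 0                  # device is visible to the kernel
          fi
          sleep 0.1                     # interval assumed; not visible in the trace
      done
      return 1                          # gave up after 20 attempts
  }

  waitfornbd nbd0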
00:06:38.314 15:19:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.314 15:19:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.314 15:19:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.314 15:19:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:38.314 15:19:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.573 Malloc0 00:06:38.573 15:19:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.834 Malloc1 00:06:38.834 15:19:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.834 15:19:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.092 /dev/nbd0 00:06:39.092 15:19:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.092 15:19:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.092 1+0 records in 00:06:39.092 1+0 records out 
00:06:39.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280333 s, 14.6 MB/s 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:39.092 15:19:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:39.092 15:19:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.092 15:19:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.092 15:19:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.350 /dev/nbd1 00:06:39.350 15:19:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.350 15:19:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.350 1+0 records in 00:06:39.350 1+0 records out 00:06:39.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254522 s, 16.1 MB/s 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:39.350 15:19:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:39.350 15:19:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.350 15:19:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.350 15:19:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.350 15:19:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.609 15:19:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.867 { 00:06:39.867 "bdev_name": "Malloc0", 00:06:39.867 "nbd_device": "/dev/nbd0" 00:06:39.867 }, 00:06:39.867 { 00:06:39.867 "bdev_name": "Malloc1", 00:06:39.867 "nbd_device": "/dev/nbd1" 00:06:39.867 } 
00:06:39.867 ]' 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.867 { 00:06:39.867 "bdev_name": "Malloc0", 00:06:39.867 "nbd_device": "/dev/nbd0" 00:06:39.867 }, 00:06:39.867 { 00:06:39.867 "bdev_name": "Malloc1", 00:06:39.867 "nbd_device": "/dev/nbd1" 00:06:39.867 } 00:06:39.867 ]' 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.867 /dev/nbd1' 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.867 /dev/nbd1' 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.867 256+0 records in 00:06:39.867 256+0 records out 00:06:39.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0074095 s, 142 MB/s 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.867 256+0 records in 00:06:39.867 256+0 records out 00:06:39.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252651 s, 41.5 MB/s 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.867 256+0 records in 00:06:39.867 256+0 records out 00:06:39.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233573 s, 44.9 MB/s 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.867 15:19:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.126 15:19:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.126 15:19:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.126 15:19:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.126 15:19:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.126 15:19:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.126 15:19:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.126 15:19:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.126 15:19:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.126 15:19:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.126 15:19:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.384 15:19:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.384 15:19:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.384 15:19:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.384 15:19:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.384 15:19:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.384 15:19:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.643 15:19:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.643 15:19:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.643 15:19:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.643 15:19:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.643 15:19:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.901 15:19:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.901 15:19:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.160 15:19:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:41.429 [2024-10-01 15:19:40.406259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.429 [2024-10-01 15:19:40.463982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.429 [2024-10-01 15:19:40.463995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.429 [2024-10-01 15:19:40.492991] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.429 [2024-10-01 15:19:40.493046] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:44.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.756 15:19:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60337 /var/tmp/spdk-nbd.sock 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60337 ']' 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
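nbd_get_count, which runs just above, derives the device count from the RPC server's JSON: nbd_get_disks is piped through jq to pull out each nbd_device, and grep -c /dev/nbd counts the matches; the trailing true (visible in the trace as an || fallback) keeps an empty '[]' list from failing the pipeline, which is how the teardown arrives at count=0. A sketch of the counting step, using the rpc.py path and socket from the log:

  # count exported NBD devices via the application's RPC socket
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # 0 when the list is empty
  echo "$count"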
00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:44.756 15:19:43 event.app_repeat -- event/event.sh@39 -- # killprocess 60337 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60337 ']' 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60337 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60337 00:06:44.756 killing process with pid 60337 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60337' 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60337 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60337 00:06:44.756 spdk_app_start is called in Round 0. 00:06:44.756 Shutdown signal received, stop current app iteration 00:06:44.756 Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 reinitialization... 00:06:44.756 spdk_app_start is called in Round 1. 00:06:44.756 Shutdown signal received, stop current app iteration 00:06:44.756 Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 reinitialization... 00:06:44.756 spdk_app_start is called in Round 2. 00:06:44.756 Shutdown signal received, stop current app iteration 00:06:44.756 Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 reinitialization... 00:06:44.756 spdk_app_start is called in Round 3. 00:06:44.756 Shutdown signal received, stop current app iteration 00:06:44.756 15:19:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:44.756 15:19:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:44.756 00:06:44.756 real 0m20.077s 00:06:44.756 user 0m46.544s 00:06:44.756 sys 0m2.913s 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.756 15:19:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.756 ************************************ 00:06:44.756 END TEST app_repeat 00:06:44.756 ************************************ 00:06:44.756 15:19:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:44.756 15:19:43 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:44.756 15:19:43 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.756 15:19:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.756 15:19:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.756 ************************************ 00:06:44.756 START TEST cpu_locks 00:06:44.756 ************************************ 00:06:44.756 15:19:43 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:45.037 * Looking for test storage... 
00:06:45.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.037 15:19:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.037 --rc genhtml_branch_coverage=1 00:06:45.037 --rc genhtml_function_coverage=1 00:06:45.037 --rc genhtml_legend=1 00:06:45.037 --rc geninfo_all_blocks=1 00:06:45.037 --rc geninfo_unexecuted_blocks=1 00:06:45.037 00:06:45.037 ' 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.037 --rc genhtml_branch_coverage=1 00:06:45.037 --rc genhtml_function_coverage=1 
00:06:45.037 --rc genhtml_legend=1 00:06:45.037 --rc geninfo_all_blocks=1 00:06:45.037 --rc geninfo_unexecuted_blocks=1 00:06:45.037 00:06:45.037 ' 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.037 --rc genhtml_branch_coverage=1 00:06:45.037 --rc genhtml_function_coverage=1 00:06:45.037 --rc genhtml_legend=1 00:06:45.037 --rc geninfo_all_blocks=1 00:06:45.037 --rc geninfo_unexecuted_blocks=1 00:06:45.037 00:06:45.037 ' 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.037 --rc genhtml_branch_coverage=1 00:06:45.037 --rc genhtml_function_coverage=1 00:06:45.037 --rc genhtml_legend=1 00:06:45.037 --rc geninfo_all_blocks=1 00:06:45.037 --rc geninfo_unexecuted_blocks=1 00:06:45.037 00:06:45.037 ' 00:06:45.037 15:19:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:45.037 15:19:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:45.037 15:19:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:45.037 15:19:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.037 15:19:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.037 ************************************ 00:06:45.037 START TEST default_locks 00:06:45.037 ************************************ 00:06:45.037 15:19:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:45.037 15:19:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60969 00:06:45.037 15:19:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60969 00:06:45.037 15:19:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.037 15:19:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60969 ']' 00:06:45.037 15:19:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.037 15:19:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.037 15:19:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.037 15:19:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.037 15:19:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.037 [2024-10-01 15:19:44.088928] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
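The lcov gate at the top of cpu_locks (lt 1.15 2 via cmp_versions) compares dotted version strings componentwise: both strings are split on '.', '-' and ':' into arrays, then walked index by index until one component wins. A sketch reconstructed from the scripts/common.sh trace above, so treat the exact body as an approximation:

  # componentwise "is version A older than B", per the cmp_versions trace
  version_lt() {
      local IFS='.-:' v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A is newer
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A is older
      done
      return 1                                              # equal is not less-than
  }

  version_lt 1.15 2 && echo "lcov older than 2"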
00:06:45.037 [2024-10-01 15:19:44.089056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60969 ] 00:06:45.296 [2024-10-01 15:19:44.227575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.296 [2024-10-01 15:19:44.302926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.232 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.232 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:46.232 15:19:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60969 00:06:46.232 15:19:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.232 15:19:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60969 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60969 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60969 ']' 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60969 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60969 00:06:46.490 killing process with pid 60969 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60969' 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60969 00:06:46.490 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60969 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60969 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60969 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60969 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60969 ']' 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.749 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.749 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60969) - No such process 00:06:46.749 ERROR: process (pid: 60969) is no longer running 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:46.749 15:19:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:46.749 ************************************ 00:06:46.750 END TEST default_locks 00:06:46.750 ************************************ 00:06:46.750 15:19:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:46.750 00:06:46.750 real 0m1.903s 00:06:46.750 user 0m2.229s 00:06:46.750 sys 0m0.497s 00:06:46.750 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.750 15:19:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.008 15:19:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:47.008 15:19:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.008 15:19:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.008 15:19:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.008 ************************************ 00:06:47.008 START TEST default_locks_via_rpc 00:06:47.008 ************************************ 00:06:47.008 15:19:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:47.008 15:19:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61033 00:06:47.008 15:19:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61033 00:06:47.008 15:19:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61033 ']' 00:06:47.008 15:19:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.008 15:19:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.008 15:19:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
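locks_exist, used by default_locks above, asserts that the target really holds its CPU-core locks: lslocks -p <pid> lists the file locks the process owns, and the output is grepped for the spdk_cpu_lock name. A sketch of the check; only the name fragment appears in the trace, so the exact lock-file location is an assumption:

  # assert that a running spdk_tgt holds an advisory lock per claimed core
  locks_exist() {
      local pid=$1
      # lock files carry "spdk_cpu_lock" in their name; exact path assumed
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist 60969 && echo "core locks held by pid 60969"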
00:06:47.008 15:19:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.008 15:19:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.008 15:19:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.008 [2024-10-01 15:19:46.022703] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:06:47.008 [2024-10-01 15:19:46.022824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61033 ] 00:06:47.008 [2024-10-01 15:19:46.162050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.266 [2024-10-01 15:19:46.226110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61033 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61033 00:06:47.266 15:19:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61033 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 61033 ']' 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 61033 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61033 00:06:47.832 killing process with pid 61033 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61033' 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 61033 00:06:47.832 15:19:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 61033 00:06:48.091 00:06:48.091 real 0m1.271s 00:06:48.091 user 0m1.387s 00:06:48.091 sys 0m0.471s 00:06:48.091 15:19:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.091 15:19:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.091 ************************************ 00:06:48.091 END TEST default_locks_via_rpc 00:06:48.091 ************************************ 00:06:48.091 15:19:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:48.091 15:19:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.091 15:19:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.091 15:19:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.350 ************************************ 00:06:48.350 START TEST non_locking_app_on_locked_coremask 00:06:48.350 ************************************ 00:06:48.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.350 15:19:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:48.350 15:19:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61089 00:06:48.350 15:19:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.350 15:19:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61089 /var/tmp/spdk.sock 00:06:48.350 15:19:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61089 ']' 00:06:48.350 15:19:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.350 15:19:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.350 15:19:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.350 15:19:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.350 15:19:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.350 [2024-10-01 15:19:47.355557] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
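The locks_exist check traced above is a one-liner: lslocks (util-linux) lists the file locks held by the target pid, and the check passes only if one of them is backed by an spdk_cpu_lock file. Reproduced as-is from the trace:

    # Succeeds only when pid $1 holds a lock on a /var/tmp/spdk_cpu_lock_* file.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 61033 && echo 'core locks held'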
00:06:48.350 [2024-10-01 15:19:47.355955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61089 ] 00:06:48.350 [2024-10-01 15:19:47.498210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.609 [2024-10-01 15:19:47.556923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61117 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61117 /var/tmp/spdk2.sock 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61117 ']' 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.176 15:19:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.434 [2024-10-01 15:19:48.392415] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:06:49.434 [2024-10-01 15:19:48.392715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61117 ] 00:06:49.434 [2024-10-01 15:19:48.535236] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
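The second target in this test is started with --disable-cpumask-locks, which is why its banner prints the extra app.c:914 notice 'CPU core locks deactivated': it skips claiming the per-core lock files entirely, so it can share core 0 with the first instance. The launch pair reduces to the following (paths as in the trace, run order significant):

    # First instance claims /var/tmp/spdk_cpu_lock_000; the second skips claiming.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &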
00:06:49.434 [2024-10-01 15:19:48.535296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.692 [2024-10-01 15:19:48.655893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.261 15:19:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.261 15:19:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.261 15:19:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61089 00:06:50.261 15:19:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61089 00:06:50.261 15:19:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61089 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61089 ']' 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61089 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61089 00:06:51.632 killing process with pid 61089 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61089' 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61089 00:06:51.632 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61089 00:06:51.890 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61117 00:06:51.890 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61117 ']' 00:06:51.890 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61117 00:06:51.890 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.890 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.890 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61117 00:06:51.890 killing process with pid 61117 00:06:51.890 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.890 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.890 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61117' 00:06:51.890 15:19:50 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61117 00:06:51.890 15:19:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61117 00:06:52.147 ************************************ 00:06:52.147 END TEST non_locking_app_on_locked_coremask 00:06:52.147 ************************************ 00:06:52.147 00:06:52.147 real 0m3.979s 00:06:52.147 user 0m4.769s 00:06:52.147 sys 0m1.002s 00:06:52.147 15:19:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.147 15:19:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.147 15:19:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:52.147 15:19:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.147 15:19:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.147 15:19:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.147 ************************************ 00:06:52.147 START TEST locking_app_on_unlocked_coremask 00:06:52.147 ************************************ 00:06:52.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.147 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:52.148 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61196 00:06:52.148 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61196 /var/tmp/spdk.sock 00:06:52.148 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61196 ']' 00:06:52.148 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.148 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:52.148 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.148 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.148 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.148 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.406 [2024-10-01 15:19:51.361256] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:06:52.406 [2024-10-01 15:19:51.361614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61196 ] 00:06:52.406 [2024-10-01 15:19:51.499982] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
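Teardown in these tests goes through killprocess, traced above for pids 61089 and 61117: it resolves the process name with ps (refusing to signal anything running as sudo), sends a signal, and waits so the core lock files are released before the next test starts. A condensed sketch of that flow; the real helper in autotest_common.sh handles more corner cases:

    killprocess_sketch() {
        local pid=$1 name
        name=$(ps --no-headers -o comm= "$pid") || return 1  # pid already gone?
        [ "$name" != sudo ] || return 1                      # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it so its spdk_cpu_lock_* files are gone
    }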
00:06:52.406 [2024-10-01 15:19:51.500228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.406 [2024-10-01 15:19:51.565030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61206 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61206 /var/tmp/spdk2.sock 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61206 ']' 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.663 15:19:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.663 [2024-10-01 15:19:51.808317] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:06:52.663 [2024-10-01 15:19:51.808458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61206 ] 00:06:52.922 [2024-10-01 15:19:51.955063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.922 [2024-10-01 15:19:52.083932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.867 15:19:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.867 15:19:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:53.867 15:19:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61206 00:06:53.867 15:19:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61206 00:06:53.867 15:19:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.800 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61196 00:06:54.800 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61196 ']' 00:06:54.800 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 61196 00:06:54.801 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:54.801 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.801 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61196 00:06:54.801 killing process with pid 61196 00:06:54.801 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.801 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.801 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61196' 00:06:54.801 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 61196 00:06:54.801 15:19:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 61196 00:06:55.367 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61206 00:06:55.367 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61206 ']' 00:06:55.367 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 61206 00:06:55.367 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:55.367 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.367 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61206 00:06:55.367 killing process with pid 61206 00:06:55.367 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.367 15:19:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.367 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61206' 00:06:55.367 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 61206 00:06:55.367 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 61206 00:06:55.625 00:06:55.625 real 0m3.466s 00:06:55.625 user 0m4.149s 00:06:55.625 sys 0m0.951s 00:06:55.625 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.625 15:19:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.625 ************************************ 00:06:55.625 END TEST locking_app_on_unlocked_coremask 00:06:55.625 ************************************ 00:06:55.625 15:19:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:55.625 15:19:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.625 15:19:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.625 15:19:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.882 ************************************ 00:06:55.882 START TEST locking_app_on_locked_coremask 00:06:55.882 ************************************ 00:06:55.882 15:19:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:55.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.882 15:19:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61284 00:06:55.882 15:19:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61284 /var/tmp/spdk.sock 00:06:55.882 15:19:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.882 15:19:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61284 ']' 00:06:55.882 15:19:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.882 15:19:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.882 15:19:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.882 15:19:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.882 15:19:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.882 [2024-10-01 15:19:54.870784] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:06:55.882 [2024-10-01 15:19:54.870887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61284 ] 00:06:55.882 [2024-10-01 15:19:55.023121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.140 [2024-10-01 15:19:55.106865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.705 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.705 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:56.705 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61312 00:06:56.705 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61312 /var/tmp/spdk2.sock 00:06:56.705 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:56.705 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:56.705 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61312 /var/tmp/spdk2.sock 00:06:56.705 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:56.705 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.705 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:56.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.963 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.963 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61312 /var/tmp/spdk2.sock 00:06:56.963 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61312 ']' 00:06:56.963 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.963 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.963 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.963 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.963 15:19:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.963 [2024-10-01 15:19:55.953859] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
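Here the second launch is expected to fail: NOT waitforlisten inverts the helper's exit status, so the test passes exactly when the new target aborts with the core-claim error logged below. A condensed sketch of the inversion (the real NOT in autotest_common.sh also inspects the exit code, as the es=1 trace shows), reusing the waitforlisten sketch from earlier:

    # Pass when the wrapped command fails, fail when it succeeds.
    NOT() {
        ! "$@"
    }
    NOT waitforlisten_sketch 61312 /var/tmp/spdk2.sock \
        && echo 'second claim rejected, as expected'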
00:06:56.963 [2024-10-01 15:19:55.953974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61312 ] 00:06:56.963 [2024-10-01 15:19:56.103890] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61284 has claimed it. 00:06:56.963 [2024-10-01 15:19:56.103968] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:57.529 ERROR: process (pid: 61312) is no longer running 00:06:57.529 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61312) - No such process 00:06:57.529 15:19:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.529 15:19:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:57.529 15:19:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:57.529 15:19:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.529 15:19:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.529 15:19:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.529 15:19:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61284 00:06:57.529 15:19:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.529 15:19:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61284 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61284 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61284 ']' 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61284 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61284 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61284' 00:06:58.096 killing process with pid 61284 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61284 00:06:58.096 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61284 00:06:58.354 00:06:58.354 real 0m2.654s 00:06:58.354 user 0m3.263s 00:06:58.354 sys 0m0.581s 00:06:58.354 15:19:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.354 15:19:57 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:58.354 ************************************ 00:06:58.354 END TEST locking_app_on_locked_coremask 00:06:58.354 ************************************ 00:06:58.354 15:19:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:58.354 15:19:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.354 15:19:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.354 15:19:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.354 ************************************ 00:06:58.354 START TEST locking_overlapped_coremask 00:06:58.354 ************************************ 00:06:58.354 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:58.354 15:19:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61369 00:06:58.354 15:19:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:58.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.354 15:19:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61369 /var/tmp/spdk.sock 00:06:58.354 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61369 ']' 00:06:58.354 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.354 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.354 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.354 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.354 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.613 [2024-10-01 15:19:57.572599] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:06:58.613 [2024-10-01 15:19:57.572706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61369 ] 00:06:58.613 [2024-10-01 15:19:57.711092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.613 [2024-10-01 15:19:57.772986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.613 [2024-10-01 15:19:57.773109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.613 [2024-10-01 15:19:57.773332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61380 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61380 /var/tmp/spdk2.sock 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61380 /var/tmp/spdk2.sock 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61380 /var/tmp/spdk2.sock 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61380 ']' 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.871 15:19:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.131 [2024-10-01 15:19:58.042142] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
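The collision this test provokes is visible in the two masks: -m 0x7 pins reactors to cores 0-2 and -m 0x1c to cores 2-4, so the instances overlap on exactly one core, the core 2 named in the claim_cpu_cores error below. The intersection checks out with shell arithmetic:

    # 0x7 = 0b00111 (cores 0,1,2); 0x1c = 0b11100 (cores 2,3,4).
    printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. bit 2 = core 2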
00:06:59.131 [2024-10-01 15:19:58.042960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61380 ] 00:06:59.131 [2024-10-01 15:19:58.192066] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61369 has claimed it. 00:06:59.131 [2024-10-01 15:19:58.192139] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:59.698 ERROR: process (pid: 61380) is no longer running 00:06:59.698 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61380) - No such process 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61369 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 61369 ']' 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 61369 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61369 00:06:59.698 killing process with pid 61369 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61369' 00:06:59.698 15:19:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 61369 00:06:59.698 15:19:58 
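After the failed claim, check_remaining_locks (traced above) asserts that the surviving 0x7 target still holds exactly three lock files: the glob /var/tmp/spdk_cpu_lock_* must expand to the same list as the brace expansion for cores 000-002. The comparison reduces to:

    # Lock files on disk must match exactly the three cores of mask 0x7.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'no stray core locks'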
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 61369 00:06:59.957 00:06:59.957 real 0m1.541s 00:06:59.957 user 0m4.228s 00:06:59.957 sys 0m0.304s 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.957 ************************************ 00:06:59.957 END TEST locking_overlapped_coremask 00:06:59.957 ************************************ 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.957 15:19:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:59.957 15:19:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.957 15:19:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.957 15:19:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.957 ************************************ 00:06:59.957 START TEST locking_overlapped_coremask_via_rpc 00:06:59.957 ************************************ 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61430 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61430 /var/tmp/spdk.sock 00:06:59.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61430 ']' 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.957 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.216 [2024-10-01 15:19:59.151747] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:07:00.216 [2024-10-01 15:19:59.151838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61430 ] 00:07:00.216 [2024-10-01 15:19:59.285374] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.216 [2024-10-01 15:19:59.285431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.216 [2024-10-01 15:19:59.346209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.216 [2024-10-01 15:19:59.346302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.216 [2024-10-01 15:19:59.346308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61448 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61448 /var/tmp/spdk2.sock 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61448 ']' 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.474 15:19:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.474 [2024-10-01 15:19:59.595239] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:07:00.474 [2024-10-01 15:19:59.595352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61448 ] 00:07:00.731 [2024-10-01 15:19:59.744736] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.731 [2024-10-01 15:19:59.744786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.731 [2024-10-01 15:19:59.865852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.731 [2024-10-01 15:19:59.865925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.731 [2024-10-01 15:19:59.865924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.664 [2024-10-01 15:20:00.658583] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61430 has claimed it. 
00:07:01.664 2024/10/01 15:20:00 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:01.664 request: 00:07:01.664 { 00:07:01.664 "method": "framework_enable_cpumask_locks", 00:07:01.664 "params": {} 00:07:01.664 } 00:07:01.664 Got JSON-RPC error response 00:07:01.664 GoRPCClient: error on JSON-RPC call 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61430 /var/tmp/spdk.sock 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61430 ']' 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.664 15:20:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.922 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.922 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:01.922 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61448 /var/tmp/spdk2.sock 00:07:01.922 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61448 ']' 00:07:01.922 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.922 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.922 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
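With both targets launched under --disable-cpumask-locks, the locks are claimed lazily over JSON-RPC instead of at startup: framework_enable_cpumask_locks succeeds on the first socket, and the same call against the overlapping instance fails with the -32603 'Failed to claim CPU core: 2' error object logged above. Issued by hand it would look like this (repo path as in the trace):

    # First instance claims cores 0-2; the 0x1c instance is then refused core 2.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks    # returns JSON-RPC error -32603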
00:07:01.922 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.922 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.487 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.487 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:02.487 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:02.487 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.487 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.487 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.487 00:07:02.487 real 0m2.317s 00:07:02.487 user 0m1.460s 00:07:02.487 sys 0m0.184s 00:07:02.487 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.487 15:20:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.487 ************************************ 00:07:02.487 END TEST locking_overlapped_coremask_via_rpc 00:07:02.487 ************************************ 00:07:02.487 15:20:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:02.487 15:20:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61430 ]] 00:07:02.487 15:20:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61430 00:07:02.487 15:20:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61430 ']' 00:07:02.487 15:20:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61430 00:07:02.487 15:20:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:02.487 15:20:01 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.487 15:20:01 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61430 00:07:02.487 15:20:01 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.487 15:20:01 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.487 killing process with pid 61430 00:07:02.487 15:20:01 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61430' 00:07:02.487 15:20:01 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61430 00:07:02.487 15:20:01 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61430 00:07:02.745 15:20:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61448 ]] 00:07:02.745 15:20:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61448 00:07:02.745 15:20:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61448 ']' 00:07:02.745 15:20:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61448 00:07:02.745 15:20:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:02.745 15:20:01 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.745 
15:20:01 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61448 00:07:02.745 15:20:01 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:02.745 15:20:01 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:02.745 killing process with pid 61448 00:07:02.745 15:20:01 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61448' 00:07:02.745 15:20:01 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61448 00:07:02.745 15:20:01 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61448 00:07:03.003 15:20:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:03.003 15:20:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:03.003 15:20:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61430 ]] 00:07:03.003 15:20:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61430 00:07:03.003 15:20:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61430 ']' 00:07:03.003 15:20:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61430 00:07:03.003 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61430) - No such process 00:07:03.003 Process with pid 61430 is not found 00:07:03.003 15:20:02 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61430 is not found' 00:07:03.003 15:20:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61448 ]] 00:07:03.003 15:20:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61448 00:07:03.003 15:20:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61448 ']' 00:07:03.003 15:20:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61448 00:07:03.003 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61448) - No such process 00:07:03.003 Process with pid 61448 is not found 00:07:03.003 15:20:02 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61448 is not found' 00:07:03.003 15:20:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:03.003 00:07:03.003 real 0m18.239s 00:07:03.003 user 0m33.146s 00:07:03.003 sys 0m4.643s 00:07:03.003 15:20:02 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.003 15:20:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.003 ************************************ 00:07:03.003 END TEST cpu_locks 00:07:03.003 ************************************ 00:07:03.003 00:07:03.003 real 0m46.963s 00:07:03.003 user 1m33.073s 00:07:03.003 sys 0m8.240s 00:07:03.003 15:20:02 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.003 15:20:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.003 ************************************ 00:07:03.003 END TEST event 00:07:03.003 ************************************ 00:07:03.003 15:20:02 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:03.003 15:20:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.003 15:20:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.003 15:20:02 -- common/autotest_common.sh@10 -- # set +x 00:07:03.003 ************************************ 00:07:03.003 START TEST thread 00:07:03.003 ************************************ 00:07:03.003 15:20:02 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:03.262 * Looking for test storage... 
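The thread suite that starts below opens with a coverage preamble: lt 1.15 2 walks cmp_versions field by field to decide whether the installed lcov predates version 2 and therefore keeps the branch/function coverage flags. Assuming GNU sort is available, the same comparison can be sketched with version sort; this is a stand-in for the loop in scripts/common.sh, not the shipped implementation:

    # True when $1 sorts strictly before $2 as a version string.
    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo 'lcov older than 2: keep branch/function coverage flags'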
00:07:03.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.262 15:20:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.262 15:20:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.262 15:20:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.262 15:20:02 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.262 15:20:02 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.262 15:20:02 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.262 15:20:02 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.262 15:20:02 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.262 15:20:02 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.262 15:20:02 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.262 15:20:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.262 15:20:02 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:03.262 15:20:02 thread -- scripts/common.sh@345 -- # : 1 00:07:03.262 15:20:02 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.262 15:20:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.262 15:20:02 thread -- scripts/common.sh@365 -- # decimal 1 00:07:03.262 15:20:02 thread -- scripts/common.sh@353 -- # local d=1 00:07:03.262 15:20:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.262 15:20:02 thread -- scripts/common.sh@355 -- # echo 1 00:07:03.262 15:20:02 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.262 15:20:02 thread -- scripts/common.sh@366 -- # decimal 2 00:07:03.262 15:20:02 thread -- scripts/common.sh@353 -- # local d=2 00:07:03.262 15:20:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.262 15:20:02 thread -- scripts/common.sh@355 -- # echo 2 00:07:03.262 15:20:02 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.262 15:20:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.262 15:20:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.262 15:20:02 thread -- scripts/common.sh@368 -- # return 0 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.262 --rc genhtml_branch_coverage=1 00:07:03.262 --rc genhtml_function_coverage=1 00:07:03.262 --rc genhtml_legend=1 00:07:03.262 --rc geninfo_all_blocks=1 00:07:03.262 --rc geninfo_unexecuted_blocks=1 00:07:03.262 00:07:03.262 ' 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.262 --rc genhtml_branch_coverage=1 00:07:03.262 --rc genhtml_function_coverage=1 00:07:03.262 --rc genhtml_legend=1 00:07:03.262 --rc geninfo_all_blocks=1 00:07:03.262 --rc geninfo_unexecuted_blocks=1 00:07:03.262 00:07:03.262 ' 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:03.262 --rc genhtml_branch_coverage=1 00:07:03.262 --rc genhtml_function_coverage=1 00:07:03.262 --rc genhtml_legend=1 00:07:03.262 --rc geninfo_all_blocks=1 00:07:03.262 --rc geninfo_unexecuted_blocks=1 00:07:03.262 00:07:03.262 ' 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.262 --rc genhtml_branch_coverage=1 00:07:03.262 --rc genhtml_function_coverage=1 00:07:03.262 --rc genhtml_legend=1 00:07:03.262 --rc geninfo_all_blocks=1 00:07:03.262 --rc geninfo_unexecuted_blocks=1 00:07:03.262 00:07:03.262 ' 00:07:03.262 15:20:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.262 15:20:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.262 ************************************ 00:07:03.262 START TEST thread_poller_perf 00:07:03.262 ************************************ 00:07:03.262 15:20:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:03.262 [2024-10-01 15:20:02.384879] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:07:03.262 [2024-10-01 15:20:02.384980] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61607 ] 00:07:03.520 [2024-10-01 15:20:02.520842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.520 Running 1000 pollers for 1 seconds with 1 microseconds period. 
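The START TEST / END TEST banners and the real/user/sys summaries that bracket each test come from the run_test wrapper in autotest_common.sh. Its visible behaviour can be sketched as below; treat this as an inferred shape (the banners, the '[' N -le 1 ']' argument-count probe, and the time call are taken from the trace, while the rest of the real helper, such as xtrace save/restore, is omitted):

    # Inferred shape of run_test, reconstructed from the banners and the
    # "time" output in this log; not SPDK's verbatim implementation.
    run_test() {
        local test_name=$1
        shift
        [ $# -le 1 ] && :    # argument-count probe, traced as '[' N -le 1 ']'
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"            # produces the real/user/sys lines in the log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

Invoked as run_test thread_poller_perf <binary> <args>, which is exactly the call traced at thread/thread.sh@11 above.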
00:07:03.520 [2024-10-01 15:20:02.592014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.895 ====================================== 00:07:04.895 busy:2209421973 (cyc) 00:07:04.895 total_run_count: 283000 00:07:04.895 tsc_hz: 2200000000 (cyc) 00:07:04.895 ====================================== 00:07:04.895 poller_cost: 7807 (cyc), 3548 (nsec) 00:07:04.895 00:07:04.895 real 0m1.307s 00:07:04.895 user 0m1.151s 00:07:04.895 sys 0m0.050s 00:07:04.895 15:20:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.895 ************************************ 00:07:04.895 END TEST thread_poller_perf 00:07:04.895 ************************************ 00:07:04.895 15:20:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.895 15:20:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.895 15:20:03 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:04.895 15:20:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.895 15:20:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.895 ************************************ 00:07:04.895 START TEST thread_poller_perf 00:07:04.895 ************************************ 00:07:04.895 15:20:03 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.895 [2024-10-01 15:20:03.738014] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:07:04.895 [2024-10-01 15:20:03.738131] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61638 ] 00:07:04.895 [2024-10-01 15:20:03.874805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.895 Running 1000 pollers for 1 seconds with 0 microseconds period. 
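Each ====== summary block above reduces to two divisions: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure rescales that by tsc_hz. Plugging in the first run's numbers, 2209421973 / 283000 ≈ 7807 cyc and 7807 * 10^9 / 2200000000 ≈ 3548 nsec, matching the printed values. A few lines of shell arithmetic reproduce them (pure arithmetic, no SPDK code):

    # Recompute the first run's poller_cost from the summary fields.
    busy=2209421973 total_run_count=283000 tsc_hz=2200000000
    cost_cyc=$((busy / total_run_count))             # 7807
    cost_nsec=$((cost_cyc * 1000000000 / tsc_hz))    # 3548
    echo "poller_cost: $cost_cyc (cyc), $cost_nsec (nsec)"

The same arithmetic applied to the 0-microsecond run below gives 2202511059 / 3827000 ≈ 575 cyc and 261 nsec, i.e. the per-poll overhead without the sleep period.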
00:07:04.895 [2024-10-01 15:20:03.946274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.269 ====================================== 00:07:06.269 busy:2202511059 (cyc) 00:07:06.269 total_run_count: 3827000 00:07:06.269 tsc_hz: 2200000000 (cyc) 00:07:06.269 ====================================== 00:07:06.269 poller_cost: 575 (cyc), 261 (nsec) 00:07:06.269 00:07:06.269 real 0m1.302s 00:07:06.269 user 0m1.152s 00:07:06.269 sys 0m0.043s 00:07:06.269 15:20:05 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.269 15:20:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.269 ************************************ 00:07:06.269 END TEST thread_poller_perf 00:07:06.269 ************************************ 00:07:06.269 15:20:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:06.269 00:07:06.269 real 0m2.897s 00:07:06.269 user 0m2.449s 00:07:06.269 sys 0m0.221s 00:07:06.269 15:20:05 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.269 15:20:05 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.269 ************************************ 00:07:06.269 END TEST thread 00:07:06.269 ************************************ 00:07:06.269 15:20:05 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:06.269 15:20:05 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:06.269 15:20:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.269 15:20:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.269 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:07:06.269 ************************************ 00:07:06.269 START TEST app_cmdline 00:07:06.269 ************************************ 00:07:06.269 15:20:05 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:06.269 * Looking for test storage... 00:07:06.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:06.269 15:20:05 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.269 15:20:05 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.269 15:20:05 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.269 15:20:05 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:06.269 15:20:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.270 15:20:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:06.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.270 15:20:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.270 15:20:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.270 15:20:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.270 15:20:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.270 --rc genhtml_branch_coverage=1 00:07:06.270 --rc genhtml_function_coverage=1 00:07:06.270 --rc genhtml_legend=1 00:07:06.270 --rc geninfo_all_blocks=1 00:07:06.270 --rc geninfo_unexecuted_blocks=1 00:07:06.270 00:07:06.270 ' 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.270 --rc genhtml_branch_coverage=1 00:07:06.270 --rc genhtml_function_coverage=1 00:07:06.270 --rc genhtml_legend=1 00:07:06.270 --rc geninfo_all_blocks=1 00:07:06.270 --rc geninfo_unexecuted_blocks=1 00:07:06.270 00:07:06.270 ' 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.270 --rc genhtml_branch_coverage=1 00:07:06.270 --rc genhtml_function_coverage=1 00:07:06.270 --rc genhtml_legend=1 00:07:06.270 --rc geninfo_all_blocks=1 00:07:06.270 --rc geninfo_unexecuted_blocks=1 00:07:06.270 00:07:06.270 ' 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.270 --rc genhtml_branch_coverage=1 00:07:06.270 --rc genhtml_function_coverage=1 00:07:06.270 --rc genhtml_legend=1 00:07:06.270 --rc geninfo_all_blocks=1 00:07:06.270 --rc geninfo_unexecuted_blocks=1 00:07:06.270 00:07:06.270 ' 00:07:06.270 15:20:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:06.270 15:20:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61717 00:07:06.270 15:20:05 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:06.270 15:20:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61717 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61717 ']' 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.270 15:20:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.270 [2024-10-01 15:20:05.328698] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:07:06.270 [2024-10-01 15:20:05.328976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61717 ] 00:07:06.528 [2024-10-01 15:20:05.461872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.528 [2024-10-01 15:20:05.530523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.786 15:20:05 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.786 15:20:05 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:06.786 15:20:05 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:07.045 { 00:07:07.045 "fields": { 00:07:07.045 "commit": "f15f2a1dd", 00:07:07.045 "major": 25, 00:07:07.045 "minor": 1, 00:07:07.045 "patch": 0, 00:07:07.045 "suffix": "-pre" 00:07:07.045 }, 00:07:07.045 "version": "SPDK v25.01-pre git sha1 f15f2a1dd" 00:07:07.045 } 00:07:07.045 15:20:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:07.045 15:20:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:07.045 15:20:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:07.045 15:20:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:07.045 15:20:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:07.045 15:20:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.045 15:20:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.045 15:20:06 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:07.045 15:20:06 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:07.045 15:20:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:07.045 15:20:06 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.304 2024/10/01 15:20:06 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:07.304 request: 00:07:07.304 { 00:07:07.304 "method": "env_dpdk_get_mem_stats", 00:07:07.304 "params": {} 00:07:07.304 } 00:07:07.304 Got JSON-RPC error response 00:07:07.304 GoRPCClient: error on JSON-RPC call 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.304 15:20:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61717 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61717 ']' 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61717 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61717 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.304 killing process with pid 61717 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61717' 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@969 -- # kill 61717 00:07:07.304 15:20:06 app_cmdline -- common/autotest_common.sh@974 -- # wait 61717 00:07:07.563 00:07:07.563 real 0m1.585s 00:07:07.563 user 0m2.164s 00:07:07.563 sys 0m0.377s 00:07:07.563 15:20:06 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.563 15:20:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.563 ************************************ 00:07:07.563 END TEST app_cmdline 00:07:07.563 ************************************ 00:07:07.563 15:20:06 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:07.563 15:20:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.563 15:20:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.563 15:20:06 -- common/autotest_common.sh@10 -- # set +x 00:07:07.563 ************************************ 00:07:07.563 START TEST version 00:07:07.563 ************************************ 00:07:07.563 15:20:06 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:07.821 * Looking for test storage... 
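The cmdline test that just finished launches spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and exercises both sides of the allowlist: the two permitted methods answer, and env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 (Method not found). A condensed sketch of that check, assuming a target already listening on /var/tmp/spdk.sock:

    # Sketch of the allowlist check from cmdline.sh (condensed, not verbatim).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" spdk_get_version                  # allowed: returns the version JSON
    "$rpc" rpc_get_methods | jq -r '.[]' | sort   # allowed: exactly the two methods

    if "$rpc" env_dpdk_get_mem_stats 2> /dev/null; then
        echo "allowlist broken: disallowed method succeeded" >&2
        exit 1
    fi
    # expected: Code=-32601 Msg=Method not found, as logged above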
00:07:07.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:07.821 15:20:06 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:07.821 15:20:06 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:07.821 15:20:06 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:07.821 15:20:06 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:07.821 15:20:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.821 15:20:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.821 15:20:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.821 15:20:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.821 15:20:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.821 15:20:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.821 15:20:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.821 15:20:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.821 15:20:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.821 15:20:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.821 15:20:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.821 15:20:06 version -- scripts/common.sh@344 -- # case "$op" in 00:07:07.821 15:20:06 version -- scripts/common.sh@345 -- # : 1 00:07:07.821 15:20:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.821 15:20:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.822 15:20:06 version -- scripts/common.sh@365 -- # decimal 1 00:07:07.822 15:20:06 version -- scripts/common.sh@353 -- # local d=1 00:07:07.822 15:20:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.822 15:20:06 version -- scripts/common.sh@355 -- # echo 1 00:07:07.822 15:20:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.822 15:20:06 version -- scripts/common.sh@366 -- # decimal 2 00:07:07.822 15:20:06 version -- scripts/common.sh@353 -- # local d=2 00:07:07.822 15:20:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.822 15:20:06 version -- scripts/common.sh@355 -- # echo 2 00:07:07.822 15:20:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.822 15:20:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.822 15:20:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.822 15:20:06 version -- scripts/common.sh@368 -- # return 0 00:07:07.822 15:20:06 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.822 15:20:06 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:07.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.822 --rc genhtml_branch_coverage=1 00:07:07.822 --rc genhtml_function_coverage=1 00:07:07.822 --rc genhtml_legend=1 00:07:07.822 --rc geninfo_all_blocks=1 00:07:07.822 --rc geninfo_unexecuted_blocks=1 00:07:07.822 00:07:07.822 ' 00:07:07.822 15:20:06 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:07.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.822 --rc genhtml_branch_coverage=1 00:07:07.822 --rc genhtml_function_coverage=1 00:07:07.822 --rc genhtml_legend=1 00:07:07.822 --rc geninfo_all_blocks=1 00:07:07.822 --rc geninfo_unexecuted_blocks=1 00:07:07.822 00:07:07.822 ' 00:07:07.822 15:20:06 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:07.822 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:07.822 --rc genhtml_branch_coverage=1 00:07:07.822 --rc genhtml_function_coverage=1 00:07:07.822 --rc genhtml_legend=1 00:07:07.822 --rc geninfo_all_blocks=1 00:07:07.822 --rc geninfo_unexecuted_blocks=1 00:07:07.822 00:07:07.822 ' 00:07:07.822 15:20:06 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:07.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.822 --rc genhtml_branch_coverage=1 00:07:07.822 --rc genhtml_function_coverage=1 00:07:07.822 --rc genhtml_legend=1 00:07:07.822 --rc geninfo_all_blocks=1 00:07:07.822 --rc geninfo_unexecuted_blocks=1 00:07:07.822 00:07:07.822 ' 00:07:07.822 15:20:06 version -- app/version.sh@17 -- # get_header_version major 00:07:07.822 15:20:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.822 15:20:06 version -- app/version.sh@14 -- # cut -f2 00:07:07.822 15:20:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.822 15:20:06 version -- app/version.sh@17 -- # major=25 00:07:07.822 15:20:06 version -- app/version.sh@18 -- # get_header_version minor 00:07:07.822 15:20:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.822 15:20:06 version -- app/version.sh@14 -- # cut -f2 00:07:07.822 15:20:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.822 15:20:06 version -- app/version.sh@18 -- # minor=1 00:07:07.822 15:20:06 version -- app/version.sh@19 -- # get_header_version patch 00:07:07.822 15:20:06 version -- app/version.sh@14 -- # cut -f2 00:07:07.822 15:20:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.822 15:20:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.822 15:20:06 version -- app/version.sh@19 -- # patch=0 00:07:07.822 15:20:06 version -- app/version.sh@20 -- # get_header_version suffix 00:07:07.822 15:20:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.822 15:20:06 version -- app/version.sh@14 -- # cut -f2 00:07:07.822 15:20:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.822 15:20:06 version -- app/version.sh@20 -- # suffix=-pre 00:07:07.822 15:20:06 version -- app/version.sh@22 -- # version=25.1 00:07:07.822 15:20:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:07.822 15:20:06 version -- app/version.sh@28 -- # version=25.1rc0 00:07:07.822 15:20:06 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:07.822 15:20:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:07.822 15:20:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:07.822 15:20:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:07.822 00:07:07.822 real 0m0.259s 00:07:07.822 user 0m0.186s 00:07:07.822 sys 0m0.108s 00:07:07.822 15:20:06 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.822 ************************************ 00:07:07.822 END TEST version 00:07:07.822 ************************************ 00:07:07.822 15:20:06 version -- common/autotest_common.sh@10 -- # set +x 00:07:08.080 15:20:07 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:08.080 15:20:07 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:08.080 15:20:07 -- spdk/autotest.sh@194 -- # uname -s 00:07:08.080 15:20:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:08.080 15:20:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.080 15:20:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.080 15:20:07 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:08.080 15:20:07 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:08.080 15:20:07 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:08.080 15:20:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.080 15:20:07 -- common/autotest_common.sh@10 -- # set +x 00:07:08.080 15:20:07 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:08.080 15:20:07 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:08.080 15:20:07 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:08.080 15:20:07 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:08.080 15:20:07 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:08.080 15:20:07 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:08.080 15:20:07 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.081 15:20:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.081 15:20:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.081 15:20:07 -- common/autotest_common.sh@10 -- # set +x 00:07:08.081 ************************************ 00:07:08.081 START TEST nvmf_tcp 00:07:08.081 ************************************ 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.081 * Looking for test storage... 00:07:08.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.081 15:20:07 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.081 --rc genhtml_branch_coverage=1 00:07:08.081 --rc genhtml_function_coverage=1 00:07:08.081 --rc genhtml_legend=1 00:07:08.081 --rc geninfo_all_blocks=1 00:07:08.081 --rc geninfo_unexecuted_blocks=1 00:07:08.081 00:07:08.081 ' 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.081 --rc genhtml_branch_coverage=1 00:07:08.081 --rc genhtml_function_coverage=1 00:07:08.081 --rc genhtml_legend=1 00:07:08.081 --rc geninfo_all_blocks=1 00:07:08.081 --rc geninfo_unexecuted_blocks=1 00:07:08.081 00:07:08.081 ' 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.081 --rc genhtml_branch_coverage=1 00:07:08.081 --rc genhtml_function_coverage=1 00:07:08.081 --rc genhtml_legend=1 00:07:08.081 --rc geninfo_all_blocks=1 00:07:08.081 --rc geninfo_unexecuted_blocks=1 00:07:08.081 00:07:08.081 ' 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.081 --rc genhtml_branch_coverage=1 00:07:08.081 --rc genhtml_function_coverage=1 00:07:08.081 --rc genhtml_legend=1 00:07:08.081 --rc geninfo_all_blocks=1 00:07:08.081 --rc geninfo_unexecuted_blocks=1 00:07:08.081 00:07:08.081 ' 00:07:08.081 15:20:07 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:08.081 15:20:07 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:08.081 15:20:07 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.081 15:20:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.081 ************************************ 00:07:08.081 START TEST nvmf_target_core 00:07:08.081 ************************************ 00:07:08.081 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:08.340 * Looking for test storage... 00:07:08.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:08.340 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.341 --rc genhtml_branch_coverage=1 00:07:08.341 --rc genhtml_function_coverage=1 00:07:08.341 --rc genhtml_legend=1 00:07:08.341 --rc geninfo_all_blocks=1 00:07:08.341 --rc geninfo_unexecuted_blocks=1 00:07:08.341 00:07:08.341 ' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.341 --rc genhtml_branch_coverage=1 00:07:08.341 --rc genhtml_function_coverage=1 00:07:08.341 --rc genhtml_legend=1 00:07:08.341 --rc geninfo_all_blocks=1 00:07:08.341 --rc geninfo_unexecuted_blocks=1 00:07:08.341 00:07:08.341 ' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.341 --rc genhtml_branch_coverage=1 00:07:08.341 --rc genhtml_function_coverage=1 00:07:08.341 --rc genhtml_legend=1 00:07:08.341 --rc geninfo_all_blocks=1 00:07:08.341 --rc geninfo_unexecuted_blocks=1 00:07:08.341 00:07:08.341 ' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.341 --rc genhtml_branch_coverage=1 00:07:08.341 --rc genhtml_function_coverage=1 00:07:08.341 --rc genhtml_legend=1 00:07:08.341 --rc geninfo_all_blocks=1 00:07:08.341 --rc geninfo_unexecuted_blocks=1 00:07:08.341 00:07:08.341 ' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.341 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:08.341 ************************************ 00:07:08.341 START TEST nvmf_abort 00:07:08.341 ************************************ 00:07:08.341 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:08.600 * Looking for test storage... 
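Each pass through nvmf/common.sh logs "[: : integer expression expected" at line 33: the traced test is '[' '' -eq 1 ']', a numeric -eq comparison against a variable that is empty in this environment, so [ cannot parse its left operand and the test simply falls through to the false branch. A sketch of that failure mode and a guarded variant (the variable name here is a stand-in; the trace shows only its empty value):

    # Failure mode traced at nvmf/common.sh line 33, plus a guard.
    # SOME_FLAG is a placeholder; the log shows only that the value is empty.
    SOME_FLAG=""

    # What effectively ran: [ "" -eq 1 ] -> "[: : integer expression expected"
    # [ "$SOME_FLAG" -eq 1 ] && echo "flag path"

    # Guarded variant: supply a numeric default before the -eq test.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag path"
    fi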
00:07:08.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:08.600 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.601 --rc genhtml_branch_coverage=1 00:07:08.601 --rc genhtml_function_coverage=1 00:07:08.601 --rc genhtml_legend=1 00:07:08.601 --rc geninfo_all_blocks=1 00:07:08.601 --rc geninfo_unexecuted_blocks=1 00:07:08.601 00:07:08.601 ' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.601 --rc genhtml_branch_coverage=1 00:07:08.601 --rc genhtml_function_coverage=1 00:07:08.601 --rc genhtml_legend=1 00:07:08.601 --rc geninfo_all_blocks=1 00:07:08.601 --rc geninfo_unexecuted_blocks=1 00:07:08.601 00:07:08.601 ' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.601 --rc genhtml_branch_coverage=1 00:07:08.601 --rc genhtml_function_coverage=1 00:07:08.601 --rc genhtml_legend=1 00:07:08.601 --rc geninfo_all_blocks=1 00:07:08.601 --rc geninfo_unexecuted_blocks=1 00:07:08.601 00:07:08.601 ' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.601 --rc genhtml_branch_coverage=1 00:07:08.601 --rc genhtml_function_coverage=1 00:07:08.601 --rc genhtml_legend=1 00:07:08.601 --rc geninfo_all_blocks=1 00:07:08.601 --rc geninfo_unexecuted_blocks=1 00:07:08.601 00:07:08.601 ' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
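common.sh is now being sourced a second time, for the abort test; as in the nvmf_target_core pass above, it derives the host identity from nvme gen-hostnqn. The NQN has the fixed form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and NVME_HOSTID is that trailing UUID (425da7d6-2e40-4e0d-b2ef-fba0474bdabf in this run). A sketch of the derivation; the real file sets many more variables, and the parameter expansion shown is one way to peel off the UUID, since the trace shows only the resulting value:

    # Sketch of the host-identity lines traced from nvmf/common.sh.
    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' -> bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    printf '%s\n' "${NVME_HOST[@]}"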
00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.601 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:08.601 
15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # nvmf_veth_init 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:08.601 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:07:08.602 Cannot find device "nvmf_init_br" 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:08.602 Cannot find device "nvmf_init_br2" 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:08.602 Cannot find device "nvmf_tgt_br" 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:08.602 Cannot find device "nvmf_tgt_br2" 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:08.602 Cannot find device "nvmf_init_br" 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:08.602 Cannot find device "nvmf_init_br2" 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:08.602 Cannot find device "nvmf_tgt_br" 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:08.602 Cannot find device "nvmf_tgt_br2" 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:07:08.602 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:08.860 Cannot find device "nvmf_br" 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:08.860 Cannot find device "nvmf_init_if" 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:08.860 Cannot find device "nvmf_init_if2" 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:08.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:08.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:08.860 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:09.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:09.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.212 ms 00:07:09.119 00:07:09.119 --- 10.0.0.3 ping statistics --- 00:07:09.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.119 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:09.119 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:09.119 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:07:09.119 00:07:09.119 --- 10.0.0.4 ping statistics --- 00:07:09.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.119 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:09.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:09.119 00:07:09.119 --- 10.0.0.1 ping statistics --- 00:07:09.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.119 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:09.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
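The ipts calls above are a small wrapper around iptables: every rule it installs carries an "SPDK_NVMF:<rule>" comment, so teardown can later remove exactly the rules this test added and nothing else (the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence near the end of the test does just that). A minimal sketch of the idea, reconstructed from the expanded commands in the trace rather than copied from nvmf/common.sh:

    ipts() {
        # tag each rule so it can be filtered back out during cleanup
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port on initiator NIC 1
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT   # ...and on initiator NIC 2
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # let the bridge forward between its ports
    iptables-save | grep -v SPDK_NVMF | iptables-restore             # cleanup: drop only the tagged rules

The four pings that follow walk the full matrix: the root namespace reaches both target addresses, and the target namespace reaches both initiator addresses back, confirming the bridged veth topology before any NVMe traffic is attempted.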
00:07:09.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:07:09.119 00:07:09.119 --- 10.0.0.2 ping statistics --- 00:07:09.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.119 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # return 0 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=62136 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 62136 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 62136 ']' 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.119 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.378 [2024-10-01 15:20:08.299925] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
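nvmfappstart boots the target inside the namespace that was just wired up: common.sh@504 runs nvmf_tgt under ip netns exec and @506 blocks in waitforlisten until the JSON-RPC socket answers. A rough standalone equivalent using the paths from this run; the polling loop is an illustrative stand-in for the real waitforlisten helper:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # the app's UNIX socket lives on the shared filesystem, so rpc.py can poll it
    # from outside the network namespace
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done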
00:07:09.378 [2024-10-01 15:20:08.300075] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.378 [2024-10-01 15:20:08.457706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.378 [2024-10-01 15:20:08.530942] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.378 [2024-10-01 15:20:08.530997] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.378 [2024-10-01 15:20:08.531011] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.378 [2024-10-01 15:20:08.531021] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.378 [2024-10-01 15:20:08.531030] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.378 [2024-10-01 15:20:08.531144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.378 [2024-10-01 15:20:08.531303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.378 [2024-10-01 15:20:08.531307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.637 [2024-10-01 15:20:08.660114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.637 Malloc0 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.637 
Delay0 00:07:09.637 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.638 [2024-10-01 15:20:08.730240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.638 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:09.896 [2024-10-01 15:20:08.923210] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:11.798 Initializing NVMe Controllers 00:07:11.798 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:11.798 controller IO queue size 128 less than required 00:07:11.798 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:11.798 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:11.798 Initialization complete. Launching workers. 
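Everything the abort test needs was assembled through rpc_cmd, a thin wrapper that forwards to scripts/rpc.py on the target's socket. Spelled out as plain rpc.py calls (the same form the ns_hotplug_stress test below uses), the stack looks roughly like this; note that bdev_delay_create takes its latencies in microseconds, so Delay0 turns every IO into a roughly one-second operation, which is what keeps tens of thousands of commands queued long enough for the abort example (-q 128, one worker on core 0) to cancel them:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256           # transport options exactly as captured above
    $rpc bdev_malloc_create 64 4096 -b Malloc0                    # 64 MiB RAM disk, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000              # avg/p99 read+write latency, in us
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420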
00:07:11.798 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 24684 00:07:11.798 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 24745, failed to submit 62 00:07:11.798 success 24688, unsuccessful 57, failed 0 00:07:11.798 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:11.798 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.798 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.056 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.056 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:12.056 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:12.056 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:12.056 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:12.056 rmmod nvme_tcp 00:07:12.056 rmmod nvme_fabrics 00:07:12.056 rmmod nvme_keyring 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 62136 ']' 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 62136 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 62136 ']' 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 62136 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62136 00:07:12.056 killing process with pid 62136 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62136' 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 62136 00:07:12.056 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 62136 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:12.316 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:07:12.576 00:07:12.576 real 0m4.063s 00:07:12.576 user 0m10.276s 00:07:12.576 sys 0m1.084s 00:07:12.576 ************************************ 00:07:12.576 END TEST nvmf_abort 00:07:12.576 ************************************ 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.576 ************************************ 00:07:12.576 START TEST nvmf_ns_hotplug_stress 00:07:12.576 ************************************ 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:12.576 * Looking for test storage... 00:07:12.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.576 --rc genhtml_branch_coverage=1 00:07:12.576 --rc genhtml_function_coverage=1 00:07:12.576 --rc genhtml_legend=1 00:07:12.576 --rc geninfo_all_blocks=1 00:07:12.576 --rc geninfo_unexecuted_blocks=1 00:07:12.576 00:07:12.576 ' 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.576 --rc genhtml_branch_coverage=1 00:07:12.576 --rc genhtml_function_coverage=1 00:07:12.576 --rc genhtml_legend=1 00:07:12.576 --rc geninfo_all_blocks=1 00:07:12.576 --rc geninfo_unexecuted_blocks=1 00:07:12.576 00:07:12.576 ' 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.576 --rc genhtml_branch_coverage=1 00:07:12.576 --rc genhtml_function_coverage=1 00:07:12.576 --rc genhtml_legend=1 00:07:12.576 --rc geninfo_all_blocks=1 00:07:12.576 --rc geninfo_unexecuted_blocks=1 00:07:12.576 00:07:12.576 ' 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.576 --rc genhtml_branch_coverage=1 00:07:12.576 --rc genhtml_function_coverage=1 00:07:12.576 --rc genhtml_legend=1 00:07:12.576 --rc geninfo_all_blocks=1 00:07:12.576 --rc geninfo_unexecuted_blocks=1 00:07:12.576 00:07:12.576 ' 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.576 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.835 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # nvmf_veth_init 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:12.835 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:12.836 15:20:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:12.836 Cannot find device "nvmf_init_br" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:12.836 Cannot find device "nvmf_init_br2" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:12.836 Cannot find device "nvmf_tgt_br" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:12.836 Cannot find device "nvmf_tgt_br2" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:12.836 Cannot find device "nvmf_init_br" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:12.836 Cannot find device "nvmf_init_br2" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:12.836 Cannot find device "nvmf_tgt_br" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:12.836 Cannot find device "nvmf_tgt_br2" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:12.836 Cannot find device "nvmf_br" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:12.836 Cannot find device "nvmf_init_if" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:12.836 Cannot find device "nvmf_init_if2" 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:12.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:12.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:12.836 15:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:13.094 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:13.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:13.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:07:13.095 00:07:13.095 --- 10.0.0.3 ping statistics --- 00:07:13.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.095 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:13.095 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
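This is the same nvmf_veth_init sequence the abort test ran: the "Cannot find device" / "Cannot open network namespace" lines are the expected no-op teardown of any leftover topology (each failing cleanup command in the trace is immediately followed by a bare true, consistent with a cmd || true guard), after which everything is rebuilt from scratch. Condensed from the trace, the whole network fits in a dozen commands:

    ip netns add nvmf_tgt_ns_spdk
    # four veth pairs; the *_if ends carry addresses, the *_br ends plug into the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiators: .1 and .2
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # targets: .3 and .4
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                               # one bridge stitches it all together
    ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up master nvmf_br
    done

(The trace also brings each *_if interface up individually and sets lo up inside the namespace; that is omitted here for brevity.)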
00:07:13.095 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:07:13.095 00:07:13.095 --- 10.0.0.4 ping statistics --- 00:07:13.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.095 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:13.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:13.095 00:07:13.095 --- 10.0.0.1 ping statistics --- 00:07:13.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.095 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:13.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:07:13.095 00:07:13.095 --- 10.0.0.2 ping statistics --- 00:07:13.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.095 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # return 0 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=62418 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 62418 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 62418 ']' 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.095 15:20:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.095 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.095 [2024-10-01 15:20:12.224309] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:07:13.095 [2024-10-01 15:20:12.224495] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.352 [2024-10-01 15:20:12.364187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.352 [2024-10-01 15:20:12.432702] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.352 [2024-10-01 15:20:12.432787] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.352 [2024-10-01 15:20:12.432807] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.352 [2024-10-01 15:20:12.432823] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.352 [2024-10-01 15:20:12.432837] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
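Both targets in this job run with -m 0xE, and the notices around here confirm the decode: 0xE is binary 1110, so bits 1 through 3 are set, core 0 is left to the host, and the EAL reports "Total cores available: 3"; the reactor-start notices just below land on cores 2, 3 and 1 accordingly. A quick way to decode any core mask, as a sketch:

    mask=0xE
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done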
00:07:13.352 [2024-10-01 15:20:12.432940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:07:13.352 [2024-10-01 15:20:12.433717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:07:13.352 [2024-10-01 15:20:12.433731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:13.609 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:13.609 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:07:13.609 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:07:13.609 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:13.609 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:13.609 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:13.609 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:13.609 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:13.865 [2024-10-01 15:20:12.853009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:13.865 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:14.123 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:07:14.382 [2024-10-01 15:20:13.506365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:07:14.382 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:07:14.948 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:15.208 Malloc0
00:07:15.208 15:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:15.467 Delay0
00:07:15.467 15:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:15.730 15:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:15.988 NULL1
00:07:15.988 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:16.247 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- #
PERF_PID=62544 00:07:16.247 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:16.247 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.247 15:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:17.622 Read completed with error (sct=0, sc=11) 00:07:17.622 15:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.881 15:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:17.881 15:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:18.139 true 00:07:18.139 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:18.139 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.073 15:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.330 15:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:19.330 15:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:19.586 true 00:07:19.586 15:20:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:19.586 15:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.152 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.669 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:20.669 15:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:20.927 true 00:07:20.927 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:20.927 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.494 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.759 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:21.759 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:22.046 true 00:07:22.046 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:22.046 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.303 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.867 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:22.867 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:23.124 true 00:07:23.124 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:23.124 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.381 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.638 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:23.638 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:24.203 true 00:07:24.203 15:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:24.203 15:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.460 15:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.719 15:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:24.719 15:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:24.976 true 00:07:24.976 15:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:24.976 15:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.541 15:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.799 15:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:25.799 15:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:26.057 true 00:07:26.057 15:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:26.057 15:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.315 15:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.572 15:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:26.573 15:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:26.830 true 00:07:26.830 15:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:26.830 15:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.396 15:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.653 15:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:27.653 15:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:27.911 true 00:07:27.911 15:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:27.911 15:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.170 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.429 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:28.429 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:28.687 true 00:07:28.687 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:28.687 15:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.621 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.879 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:29.879 15:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:30.136 true 00:07:30.136 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:30.136 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.393 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.694 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:30.694 15:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:30.974 true 00:07:30.974 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:30.974 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.233 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.799 15:20:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:31.799 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:31.799 true 00:07:31.799 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:31.799 15:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.057 15:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.623 15:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:32.623 15:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:32.623 true 00:07:32.623 15:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:32.623 15:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.558 15:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.817 15:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:33.817 15:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:34.074 true 00:07:34.074 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:34.074 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.642 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.900 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:34.900 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:35.159 true 00:07:35.159 15:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:35.159 15:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.533 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.533 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.792 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:36.792 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:37.049 true 00:07:37.307 15:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:37.307 15:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.875 15:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.136 15:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:38.136 15:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:38.702 true 00:07:38.702 15:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:38.702 15:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.268 15:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.784 15:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:39.784 15:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:40.045 true 00:07:40.045 15:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:40.046 15:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.304 15:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.870 15:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:40.870 15:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:41.128 true 00:07:41.128 15:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:41.128 15:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.575 15:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.834 15:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:42.834 15:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:43.092 true 00:07:43.092 15:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:43.092 15:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.027 15:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.285 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:44.285 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:44.543 true 00:07:44.543 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 62544 00:07:44.543 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.916 15:20:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.219 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:46.219 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:46.493 true 00:07:46.493 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544 00:07:46.493 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.059 Initializing NVMe Controllers 00:07:47.059 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:47.059 Controller IO queue size 128, less than required. 00:07:47.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:47.059 Controller IO queue size 128, less than required. 00:07:47.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:47.059 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:47.059 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:47.059 Initialization complete. Launching workers. 
00:07:47.059 ========================================================
00:07:47.059                                                                   Latency(us)
00:07:47.059 Device Information                                                     :     IOPS   MiB/s   Average       min         max
00:07:47.059 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2033.86    0.99  31391.00   3188.19  1023990.41
00:07:47.059 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  8807.99    4.30  14532.85   3215.53   698690.57
00:07:47.059 ========================================================
00:07:47.059 Total                                                                  : 10841.84    5.29  17695.33   3188.19  1023990.41
00:07:47.059
00:07:47.319 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:47.577 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:07:47.577 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:07:47.835 true
00:07:47.835 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62544
00:07:47.835 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62544) - No such process
00:07:47.835 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62544
00:07:47.835 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.092 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:48.350 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:48.350 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:48.350 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:48.350 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:48.350 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:48.608 null0
00:07:48.868 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:48.868 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:48.868 15:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:49.127 null1
00:07:49.127 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:49.127 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:49.127 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:49.385 null2
00:07:49.385 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.385 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.385 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:49.643 null3 00:07:49.643 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.643 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.643 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:49.902 null4 00:07:49.902 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.902 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.902 15:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:50.159 null5 00:07:50.159 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.159 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.159 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:50.725 null6 00:07:50.725 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.725 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.725 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:50.983 null7 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
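The churn that finished just above (null_size 1001 through 1025) is the first phase of the test: while the spdk_nvme_perf process (PERF_PID=62544) is alive, ns_hotplug_stress.sh re-plugs the Delay0 namespace and grows NULL1 by one block per pass, so the latency summary above measures reads against namespaces that come and go; the repeated "Controller IO queue size 128, less than required" lines are perf restating that its -q 128 queue depth meets the controller's 128-entry IO queue, so excess requests apparently queue in the host driver. A hedged reconstruction of that loop from the sh@44-50 trace lines (not copied from the script source):

    while kill -0 "$PERF_PID" 2>/dev/null; do    # stop once the 30-second perf run exits
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 "$null_size"
    done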
00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
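The interleaved sh@14-18 lines above come from the add_remove helper, one instance of which runs per null bdev; each instance hot-adds and removes its namespace ten times. A hedged reconstruction from the xtrace (argument names follow the trace's "local nsid=1 bdev=null0"):

    add_remove() {
        local nsid=$1 bdev=$2
        # Ten add/remove cycles per namespace, matching the (( i < 10 )) guard in the trace
        for ((i = 0; i < 10; i++)); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }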
00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
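The sh@59-64 counters threaded through this stretch are the driver loop backgrounding one add_remove per thread; the "wait 63514 63515 63517 63518 63519 63522 63524 63526" further down collects those eight PIDs. A minimal sketch of that driver, assuming the nthreads=8 and null0-null7 bdevs created above:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &    # namespace IDs 1-8 over bdevs null0-null7
        pids+=($!)
    done
    wait "${pids[@]}"    # all eight hotplug threads must finish cleanly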
00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63514 63515 63517 63518 63519 63522 63524 63526 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.983 15:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.240 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:51.240 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:51.240 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:51.240 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:51.497 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:51.497 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.497 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:51.497 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.497 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.497 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.497 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.754 
15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.754 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.011 15:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.011 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.011 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.011 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.011 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:52.011 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:52.011 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:52.011 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.269 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.269 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.269 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:52.269 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.269 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.269 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.269 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.269 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.269 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.527 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.785 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.785 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.785 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.785 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:52.785 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.785 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.042 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.042 15:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.042 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.299 
15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.299 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.558 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.558 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.558 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.558 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.558 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.816 15:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.074 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.333 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.333 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.333 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.333 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.333 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.333 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.333 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.333 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.592 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.850 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.850 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.850 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.850 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.850 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.850 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.850 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.850 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.850 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.850 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.850 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.108 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.366 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.623 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.623 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.623 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.623 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.623 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.623 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.623 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.880 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.880 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.880 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.880 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.880 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.138 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.138 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.138 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.138 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.138 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.138 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.138 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.138 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.138 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.138 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.396 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.396 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.396 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.396 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.396 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.396 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.396 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.396 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.396 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.653 
15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.653 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.654 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.654 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.912 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.912 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.912 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.912 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:07:56.912 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.912 15:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.912 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.912 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.912 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.169 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.169 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.169 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.169 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.169 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.169 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.169 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.169 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.169 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.169 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.170 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.170 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.170 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.170 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.427 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.685 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.685 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.685 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.685 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.685 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.685 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.685 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.685 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.685 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.685 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.942 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.942 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.942 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.942 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.942 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.942 15:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:57.942 rmmod nvme_tcp
00:07:57.942 rmmod nvme_fabrics
00:07:57.942 rmmod nvme_keyring
00:07:57.942 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 62418 ']'
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 62418
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 62418 ']'
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 62418
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62418
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:07:58.200 killing process with pid 62418
15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62418'
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 62418
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 62418
00:07:58.200 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:07:58.201 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:07:58.458 ************************************
00:07:58.458 END TEST nvmf_ns_hotplug_stress
00:07:58.458 ************************************
00:07:58.458
00:07:58.458 real 0m45.990s
00:07:58.458 user 3m48.422s
00:07:58.458 sys 0m13.058s
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:58.458 ************************************
00:07:58.458 START TEST nvmf_delete_subsystem
00:07:58.458 ************************************
00:07:58.458 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:58.717 * Looking for test storage...
00:07:58.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
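The churn that fills the pages above is the whole point of the test just ended: target/ns_hotplug_stress.sh keeps attaching and detaching namespaces 1-8 on nqn.2016-06.io.spdk:cnode1 while I/O runs, ten passes in all. As a reading aid, here is a minimal sketch of the loop implied by the @16-@18 xtrace references; it is a reconstruction for illustration, not the verbatim script, and it serializes the add/remove calls that the log shows interleaved:

    #!/usr/bin/env bash
    # Hedged sketch of the hotplug loop seen in the xtrace above.
    # The rpc.py path, NQN, and null bdev names are taken from the log;
    # the loop shape itself is assumed, not copied from the script.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    i=0
    while (( i < 10 )); do                            # "(( i < 10 ))" at @16
        for n in $(shuf -e {1..8}); do                # namespace IDs in random order
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"   # @17
        done
        for n in $(shuf -e {1..8}); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"                    # @18
        done
        (( ++i ))                                     # "(( ++i ))" at @16
    done

Namespace ID n maps onto bdev null(n-1) throughout the log (add_ns -n 2 ... null1, -n 8 ... null7), which the sketch mirrors. The log now moves on to the next test in the suite: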
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:58.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.717 --rc genhtml_branch_coverage=1
00:07:58.717 --rc genhtml_function_coverage=1
00:07:58.717 --rc genhtml_legend=1
00:07:58.717 --rc geninfo_all_blocks=1
00:07:58.717 --rc geninfo_unexecuted_blocks=1
00:07:58.717
00:07:58.717 '
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:58.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.717 --rc genhtml_branch_coverage=1
00:07:58.717 --rc genhtml_function_coverage=1
00:07:58.717 --rc genhtml_legend=1
00:07:58.717 --rc geninfo_all_blocks=1
00:07:58.717 --rc geninfo_unexecuted_blocks=1
00:07:58.717
00:07:58.717 '
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:58.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.717 --rc genhtml_branch_coverage=1
00:07:58.717 --rc genhtml_function_coverage=1
00:07:58.717 --rc genhtml_legend=1
00:07:58.717 --rc geninfo_all_blocks=1
00:07:58.717 --rc geninfo_unexecuted_blocks=1
00:07:58.717
00:07:58.717 '
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:58.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.717 --rc genhtml_branch_coverage=1
00:07:58.717 --rc genhtml_function_coverage=1
00:07:58.717 --rc genhtml_legend=1
00:07:58.717 --rc geninfo_all_blocks=1
00:07:58.717 --rc geninfo_unexecuted_blocks=1
00:07:58.717
00:07:58.717 '
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:58.717 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
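The first entry that follows records a small bash pitfall worth calling out: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and test(1) rejects the empty string as a number, printing "integer expression expected". Here is the failure mode and the usual ${VAR:-0} guard in a standalone sketch; the variable name is hypothetical and is not the one common.sh actually checks:

    #!/usr/bin/env bash
    # Sketch of the "[: : integer expression expected" failure mode.
    unset MY_FLAG    # hypothetical variable, standing in for whatever common.sh@33 tests

    # With MY_FLAG empty this becomes `[ '' -eq 1 ]`: test(1) prints
    # "[: : integer expression expected" on stderr and returns status 2,
    # which `if` simply treats as false, so the script keeps going.
    if [ "$MY_FLAG" -eq 1 ]; then
        echo "flag set"
    fi

    # The usual guard: default the empty value to 0 before the numeric test.
    if [ "${MY_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi

That is exactly how the run below behaves: the error is logged and execution continues unharmed.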
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.718 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # nvmf_veth_init 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # 
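(The one genuine scripting fault in this stretch is at the top of this block: common.sh line 33 evaluates `'[' '' -eq 1 ']'`, so an empty value reaches a numeric test and `[` prints "integer expression expected"; the run continues because the test simply returns false. A small reproduction plus the usual guards; the variable name `flag` is hypothetical:)

    flag=""
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]   # guard 1: default empty to 0 before comparing
    [[ $flag == 1 ]]         # guard 2: string compare, no numeric coercion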
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:58.718 Cannot find device "nvmf_init_br" 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:58.718 Cannot find device "nvmf_init_br2" 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:58.718 Cannot find device "nvmf_tgt_br" 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:58.718 Cannot find device "nvmf_tgt_br2" 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:58.718 Cannot find device "nvmf_init_br" 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:58.718 Cannot find device "nvmf_init_br2" 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:58.718 Cannot find device "nvmf_tgt_br" 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:58.718 Cannot find device "nvmf_tgt_br2" 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:58.718 Cannot find device "nvmf_br" 00:07:58.718 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:07:58.719 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:58.719 Cannot find device "nvmf_init_if" 00:07:58.719 15:20:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:07:58.719 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:58.977 Cannot find device "nvmf_init_if2" 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:58.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:58.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:58.977 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:58.978 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:58.978 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:58.978 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:58.978 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:58.978 15:20:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:58.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:58.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:07:58.978 00:07:58.978 --- 10.0.0.3 ping statistics --- 00:07:58.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.978 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:58.978 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:58.978 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:07:58.978 00:07:58.978 --- 10.0.0.4 ping statistics --- 00:07:58.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.978 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:58.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
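(Everything from the "Cannot find device" teardown probes through the pings is nvmf_veth_init building the virtual test network: veth pairs for two initiator and two target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, all host-side peers enslaved to the nvmf_br bridge, and iptables ACCEPT rules tagged with an SPDK_NVMF comment so teardown can find them again. Condensed to a single initiator/target pair, the topology looks like this; root required, same commands as the trace minus the second pair and error handling:)

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_br nvmf_init_if nvmf_init_br nvmf_tgt_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Tagged so the cleanup pass can strip exactly these rules later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3    # initiator -> target reachability check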
00:07:58.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:58.978 00:07:58.978 --- 10.0.0.1 ping statistics --- 00:07:58.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.978 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:58.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:07:58.978 00:07:58.978 --- 10.0.0.2 ping statistics --- 00:07:58.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.978 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # return 0 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:58.978 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=64937 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 64937 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 64937 ']' 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
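(With the network up, nvmfappstart launches nvmf_tgt inside the target namespace -- note the `ip netns exec` prefix on the app command line -- and then blocks until the app answers on its RPC socket. Roughly what the waitforlisten helper does, sketched below; the loop is illustrative rather than SPDK's implementation verbatim:)

    # Poll until the SPDK app (pid in $nvmfpid) answers on the RPC
    # socket; give up if the process dies or the retry budget runs out.
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt died' >&2; exit 1; }
        scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done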
00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.237 15:20:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.237 [2024-10-01 15:20:58.225017] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:07:59.237 [2024-10-01 15:20:58.225136] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.237 [2024-10-01 15:20:58.374032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.495 [2024-10-01 15:20:58.463810] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.495 [2024-10-01 15:20:58.463878] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.495 [2024-10-01 15:20:58.463897] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.495 [2024-10-01 15:20:58.463910] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.495 [2024-10-01 15:20:58.463922] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.495 [2024-10-01 15:20:58.464081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.495 [2024-10-01 15:20:58.464097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.426 [2024-10-01 15:20:59.356458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.426 [2024-10-01 15:20:59.372619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.426 NULL1 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.426 Delay0 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.426 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=64988 00:08:00.427 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:00.427 15:20:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:00.427 [2024-10-01 15:20:59.577244] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
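(The rpc_cmd calls above assemble the actual test fixture: a TCP transport, subsystem cnode1 with a listener on 10.0.0.3:4420, a 1000 MiB null bdev wrapped in a delay bdev adding roughly one second of latency per operation -- the 1000000 values are microseconds -- and the delay bdev exported as a namespace. spdk_nvme_perf then drives queue-depth-128 random I/O at it while the test deletes the subsystem mid-flight. Consolidated, with the parameter values taken from the trace:)

    rpc=scripts/rpc.py    # against the target's /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_null_create NULL1 1000 512                 # 1000 MiB, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s per op
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank it mid-I/O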
00:08:02.358 15:21:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.359 15:21:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.359 15:21:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 starting I/O failed: -6 00:08:02.617 [2024-10-01 15:21:01.612007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b13b20 is same with the state(6) to be set 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read 
completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Write completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.617 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 [2024-10-01 15:21:01.613383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53be0 is same with the state(6) to be set 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 starting I/O failed: -6 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 starting I/O failed: -6 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 starting I/O failed: -6 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, 
sc=8) 00:08:02.618 starting I/O failed: -6 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 starting I/O failed: -6 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 starting I/O failed: -6 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 starting I/O failed: -6 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 starting I/O failed: -6 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 starting I/O failed: -6 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 starting I/O failed: -6 00:08:02.618 [2024-10-01 15:21:01.616310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fef14000c00 is same with the state(6) to be set 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read 
completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Write completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:02.618 Read completed with error (sct=0, sc=8) 00:08:03.553 [2024-10-01 15:21:02.591497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ffb0 is same with the state(6) to be set 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 [2024-10-01 15:21:02.613007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b148b0 is same with the state(6) to be set 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 [2024-10-01 15:21:02.613272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b13d00 is same with the state(6) to be set 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read 
completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 [2024-10-01 15:21:02.615862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fef1400cfe0 is same with the state(6) to be set 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Write completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 Read completed with error (sct=0, sc=8) 00:08:03.553 [2024-10-01 
15:21:02.616652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fef1400d7c0 is same with the state(6) to be set 00:08:03.553 Initializing NVMe Controllers 00:08:03.553 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:03.553 Controller IO queue size 128, less than required. 00:08:03.553 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:03.553 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:03.553 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:03.553 Initialization complete. Launching workers. 00:08:03.553 ======================================================== 00:08:03.554 Latency(us) 00:08:03.554 Device Information : IOPS MiB/s Average min max 00:08:03.554 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.46 0.08 913574.60 1397.01 1011164.66 00:08:03.554 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.50 0.08 1022511.94 1288.05 2003080.88 00:08:03.554 ======================================================== 00:08:03.554 Total : 318.96 0.16 967025.17 1288.05 2003080.88 00:08:03.554 00:08:03.554 [2024-10-01 15:21:02.617803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0ffb0 (9): Bad file descriptor 00:08:03.554 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:03.554 15:21:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.554 15:21:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:03.554 15:21:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64988 00:08:03.554 15:21:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64988 00:08:04.120 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (64988) - No such process 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 64988 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 64988 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 64988 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:04.120 15:21:03 
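(The wall of completions above is the expected wreckage, not a failure: sct=0 is the generic command status type and sc=8, i.e. 0x08, is "Command Aborted due to SQ Deletion", which is precisely what in-flight commands should return once nvmf_delete_subsystem tears the queues down, and the "starting I/O failed: -6" lines match -ENXIO from submissions racing the teardown. A tiny decoder for scanning such logs; illustrative helper, not part of the test:)

    # Map the (sct, sc) pair seen in these logs to a readable verdict.
    decode_status() {
        local sct=$1 sc=$2
        if ((sct == 0 && sc == 8)); then
            echo 'generic / command aborted due to SQ deletion (expected here)'
        else
            echo "sct=$sct sc=$sc -- consult the NVMe base spec status tables"
        fi
    }
    decode_status 0 8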
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.120 [2024-10-01 15:21:03.144160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65034 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65034 00:08:04.120 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:04.378 [2024-10-01 15:21:03.319595] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
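(The sleep-0.5 iterations that follow are the test's bounded wait for the second perf run to die on its own once the subsystem disappears: `kill -0` merely probes whether the PID still exists, and a delay counter caps the wait at 20 tries here, 30 on the first run. The idiom, extracted:)

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        ((delay++ > 20)) && { echo "perf $perf_pid hung" >&2; exit 1; }
        sleep 0.5
    done
    # Reaching here means perf exited, as the "No such process" kill
    # probe in the log confirms.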
00:08:04.636 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:04.636 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65034 00:08:04.636 15:21:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:05.201 15:21:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:05.201 15:21:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65034 00:08:05.201 15:21:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:05.789 15:21:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:05.789 15:21:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65034 00:08:05.789 15:21:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:06.066 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.066 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65034 00:08:06.066 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:06.631 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.631 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65034 00:08:06.631 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.195 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:07.195 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65034 00:08:07.196 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.452 Initializing NVMe Controllers 00:08:07.452 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:07.452 Controller IO queue size 128, less than required. 00:08:07.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:07.452 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:07.452 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:07.452 Initialization complete. Launching workers. 
00:08:07.452 ======================================================== 00:08:07.452 Latency(us) 00:08:07.452 Device Information : IOPS MiB/s Average min max 00:08:07.452 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002997.80 1000148.60 1010776.73 00:08:07.452 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005600.89 1000410.51 1041536.45 00:08:07.452 ======================================================== 00:08:07.452 Total : 256.00 0.12 1004299.34 1000148.60 1041536.45 00:08:07.452 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65034 00:08:07.709 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65034) - No such process 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65034 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:07.709 rmmod nvme_tcp 00:08:07.709 rmmod nvme_fabrics 00:08:07.709 rmmod nvme_keyring 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 64937 ']' 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 64937 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 64937 ']' 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 64937 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64937 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.709 killing process with pid 64937 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64937' 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 64937 00:08:07.709 15:21:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 64937 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:07.966 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:07.967 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.967 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:07.967 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:07.967 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:07.967 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:07.967 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
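(Cleanup mirrors setup. Because every firewall rule was inserted with an SPDK_NVMF comment, the iptr helper being traced here can remove all of them in one atomic pass before the veth links and the namespace are deleted:)

    # Drop exactly the SPDK-tagged rules: dump, filter, reload.
    iptables-save | grep -v SPDK_NVMF | iptables-restore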
'_remove_spdk_ns 15> /dev/null' 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:08:08.223 00:08:08.223 real 0m9.661s 00:08:08.223 user 0m29.067s 00:08:08.223 sys 0m1.652s 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.223 ************************************ 00:08:08.223 END TEST nvmf_delete_subsystem 00:08:08.223 ************************************ 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.223 ************************************ 00:08:08.223 START TEST nvmf_host_management 00:08:08.223 ************************************ 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:08.223 * Looking for test storage... 00:08:08.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:08.223 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:08.482 
15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:08.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.482 --rc genhtml_branch_coverage=1 00:08:08.482 --rc genhtml_function_coverage=1 00:08:08.482 --rc genhtml_legend=1 00:08:08.482 --rc geninfo_all_blocks=1 00:08:08.482 --rc geninfo_unexecuted_blocks=1 00:08:08.482 00:08:08.482 ' 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:08.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.482 --rc genhtml_branch_coverage=1 00:08:08.482 --rc genhtml_function_coverage=1 00:08:08.482 --rc genhtml_legend=1 00:08:08.482 --rc geninfo_all_blocks=1 00:08:08.482 --rc geninfo_unexecuted_blocks=1 00:08:08.482 00:08:08.482 ' 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:08.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.482 --rc genhtml_branch_coverage=1 00:08:08.482 --rc genhtml_function_coverage=1 00:08:08.482 --rc genhtml_legend=1 00:08:08.482 --rc geninfo_all_blocks=1 00:08:08.482 --rc geninfo_unexecuted_blocks=1 00:08:08.482 00:08:08.482 ' 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:08.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.482 --rc genhtml_branch_coverage=1 00:08:08.482 --rc 
genhtml_function_coverage=1 00:08:08.482 --rc genhtml_legend=1 00:08:08.482 --rc geninfo_all_blocks=1 00:08:08.482 --rc geninfo_unexecuted_blocks=1 00:08:08.482 00:08:08.482 ' 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.482 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same three toolchain bin directories repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated toolchain directories ...]:/var/lib/snapd/snap/bin 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated toolchain directories ...]:/var/lib/snapd/snap/bin 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same repeated toolchain directories ...]:/var/lib/snapd/snap/bin 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:08:08.483 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:08.483 15:21:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:08.483 Cannot find device "nvmf_init_br" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:08.483 Cannot find device "nvmf_init_br2" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:08.483 Cannot find device "nvmf_tgt_br" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.483 Cannot find device "nvmf_tgt_br2" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:08.483 Cannot find device "nvmf_init_br" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:08.483 Cannot find device "nvmf_init_br2" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:08.483 Cannot find device "nvmf_tgt_br" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:08.483 Cannot find device "nvmf_tgt_br2" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:08.483 Cannot find device "nvmf_br" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:08.483 15:21:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:08.483 Cannot find device "nvmf_init_if" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:08.483 Cannot find device "nvmf_init_if2" 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:08.483 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:08.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:08.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:08:08.742 00:08:08.742 --- 10.0.0.3 ping statistics --- 00:08:08.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.742 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:08.742 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:08:08.742 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:08:08.742 00:08:08.742 --- 10.0.0.4 ping statistics --- 00:08:08.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.742 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:08.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:08.742 00:08:08.742 --- 10.0.0.1 ping statistics --- 00:08:08.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.742 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:08.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:08.742 00:08:08.742 --- 10.0.0.2 ping statistics --- 00:08:08.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.742 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.742 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.002 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:09.002 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=65327 00:08:09.002 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 65327 00:08:09.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:09.002 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 65327 ']' 00:08:09.002 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.002 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.002 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.002 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.002 15:21:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.002 [2024-10-01 15:21:07.980301] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:08:09.002 [2024-10-01 15:21:07.980443] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.002 [2024-10-01 15:21:08.119729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.261 [2024-10-01 15:21:08.199852] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.261 [2024-10-01 15:21:08.200122] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.261 [2024-10-01 15:21:08.200301] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.261 [2024-10-01 15:21:08.200620] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.261 [2024-10-01 15:21:08.200743] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:09.261 [2024-10-01 15:21:08.200946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.261 [2024-10-01 15:21:08.200994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.261 [2024-10-01 15:21:08.201095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:09.261 [2024-10-01 15:21:08.201104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.196 [2024-10-01 15:21:09.047363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.196 Malloc0 00:08:10.196 [2024-10-01 15:21:09.102597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:10.196 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65399 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65399 /var/tmp/bdevperf.sock 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:10.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 65399 ']' 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:10.197 { 00:08:10.197 "params": { 00:08:10.197 "name": "Nvme$subsystem", 00:08:10.197 "trtype": "$TEST_TRANSPORT", 00:08:10.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.197 "adrfam": "ipv4", 00:08:10.197 "trsvcid": "$NVMF_PORT", 00:08:10.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.197 "hdgst": ${hdgst:-false}, 00:08:10.197 "ddgst": ${ddgst:-false} 00:08:10.197 }, 00:08:10.197 "method": "bdev_nvme_attach_controller" 00:08:10.197 } 00:08:10.197 EOF 00:08:10.197 )") 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:10.197 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:10.197 "params": { 00:08:10.197 "name": "Nvme0", 00:08:10.197 "trtype": "tcp", 00:08:10.197 "traddr": "10.0.0.3", 00:08:10.197 "adrfam": "ipv4", 00:08:10.197 "trsvcid": "4420", 00:08:10.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.197 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:10.197 "hdgst": false, 00:08:10.197 "ddgst": false 00:08:10.197 }, 00:08:10.197 "method": "bdev_nvme_attach_controller" 00:08:10.197 }' 00:08:10.197 [2024-10-01 15:21:09.196438] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:08:10.197 [2024-10-01 15:21:09.196536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65399 ] 00:08:10.197 [2024-10-01 15:21:09.344598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.469 [2024-10-01 15:21:09.417506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.469 Running I/O for 10 seconds... 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:10.469 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.728 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.728 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=20 00:08:10.728 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 20 -ge 100 ']' 00:08:10.728 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 
00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.989 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:10.989 [2024-10-01 15:21:09.976481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110f60 is same with the state(6) to be set
[... the same tcp.c:1773 recv-state *ERROR* line repeated roughly 60 more times, identical except for timestamps 15:21:09.976552 through 15:21:09.977077; trimmed ...]
00:08:10.990 [2024-10-01 15:21:09.977193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:10.990 [2024-10-01 15:21:09.977224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:1 through cid:26 (lba 57472 through 60672, len:128 each); the capture is cut off mid-pair here ...]
DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:10.991 [2024-10-01 15:21:09.978952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.978984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.978993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 
15:21:09.979259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979639] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.991 [2024-10-01 15:21:09.979845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.991 [2024-10-01 15:21:09.979858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.992 [2024-10-01 15:21:09.979869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:10.992 [2024-10-01 15:21:09.979880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ee4f0 is same with the state(6) to be set 00:08:10.992 [2024-10-01 15:21:09.979931] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5ee4f0 was disconnected and freed. reset controller. 
00:08:10.992 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.992 [2024-10-01 15:21:09.981234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:08:10.992 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:10.992 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.992 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:10.992 task offset: 57344 on job bdev=Nvme0n1 fails
00:08:10.992
00:08:10.992                                                              Latency(us)
00:08:10.992 Device Information                     : runtime(s)     IOPS      MiB/s    Fail/s    TO/s     Average       min       max
00:08:10.992 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:10.992 Job: Nvme0n1 ended in about 0.41 seconds with error
00:08:10.992 Verification LBA range: start 0x0 length 0x400
00:08:10.992 Nvme0n1                                : 0.41        1084.23     67.76    154.89    0.00    49805.00   6345.08  60293.12
00:08:10.992 ===================================================================================================================
00:08:10.992 Total                                  :             1084.23     67.76    154.89    0.00    49805.00   6345.08  60293.12
00:08:10.992 [2024-10-01 15:21:09.983788] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:10.992 [2024-10-01 15:21:09.983826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ee730 (9): Bad file descriptor
00:08:10.992 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.992 15:21:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:08:10.992 [2024-10-01 15:21:09.995501] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
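The MiB/s column in these bdevperf tables is just IOPS times the 64 KiB I/O size (-o 65536). A quick sanity check of the row above (a minimal sketch, not part of the test suite):

# MiB/s = IOPS * io_size / 2^20; with 65536-byte I/Os that reduces to IOPS / 16.
awk 'BEGIN { iops = 1084.23; io_size = 65536; printf "%.2f MiB/s\n", iops * io_size / 1048576 }'
# prints 67.76 MiB/s, matching the Nvme0n1 row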
00:08:11.927 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65399 00:08:11.927 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65399) - No such process 00:08:11.927 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:11.927 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:11.927 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:11.927 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:11.927 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:11.927 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:11.927 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:11.927 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:11.927 { 00:08:11.927 "params": { 00:08:11.927 "name": "Nvme$subsystem", 00:08:11.927 "trtype": "$TEST_TRANSPORT", 00:08:11.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:11.927 "adrfam": "ipv4", 00:08:11.927 "trsvcid": "$NVMF_PORT", 00:08:11.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:11.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:11.927 "hdgst": ${hdgst:-false}, 00:08:11.927 "ddgst": ${ddgst:-false} 00:08:11.927 }, 00:08:11.927 "method": "bdev_nvme_attach_controller" 00:08:11.927 } 00:08:11.927 EOF 00:08:11.927 )") 00:08:11.927 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:11.927 15:21:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:11.927 15:21:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:11.927 15:21:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:11.927 "params": { 00:08:11.927 "name": "Nvme0", 00:08:11.927 "trtype": "tcp", 00:08:11.927 "traddr": "10.0.0.3", 00:08:11.927 "adrfam": "ipv4", 00:08:11.927 "trsvcid": "4420", 00:08:11.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.927 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:11.927 "hdgst": false, 00:08:11.927 "ddgst": false 00:08:11.927 }, 00:08:11.927 "method": "bdev_nvme_attach_controller" 00:08:11.927 }' 00:08:11.927 [2024-10-01 15:21:11.077329] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:08:11.927 [2024-10-01 15:21:11.077506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65445 ] 00:08:12.188 [2024-10-01 15:21:11.232572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.188 [2024-10-01 15:21:11.311866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.451 Running I/O for 1 seconds... 
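The gen_nvmf_target_json output above is what bdevperf reads through --json /dev/fd/62. Written out as a standalone file it would look roughly like this (a sketch: the params block is the one printed in the log, while the surrounding "subsystems"/"bdev" wrapper and the temp-file path are assumptions):

# Replay the generated attach config by hand (sketch).
cat > /tmp/nvme0_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same knobs as the run above: queue depth 64, 64 KiB I/Os, verify workload, 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 1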
00:08:13.407 1304.00 IOPS, 81.50 MiB/s 00:08:13.407 Latency(us) 00:08:13.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.407 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:13.407 Verification LBA range: start 0x0 length 0x400 00:08:13.407 Nvme0n1 : 1.02 1340.15 83.76 0.00 0.00 46474.21 2993.80 41943.04 00:08:13.407 =================================================================================================================== 00:08:13.407 Total : 1340.15 83.76 0.00 0.00 46474.21 2993.80 41943.04 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:13.678 rmmod nvme_tcp 00:08:13.678 rmmod nvme_fabrics 00:08:13.678 rmmod nvme_keyring 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 65327 ']' 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 65327 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 65327 ']' 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 65327 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65327 00:08:13.678 killing process with pid 65327 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:13.678 15:21:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65327' 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 65327 00:08:13.678 15:21:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 65327 00:08:13.951 [2024-10-01 15:21:13.000354] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:13.951 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.214 15:21:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:14.214 00:08:14.214 real 0m5.967s 00:08:14.214 user 0m21.872s 00:08:14.214 sys 0m1.378s 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.214 ************************************ 00:08:14.214 END TEST nvmf_host_management 00:08:14.214 ************************************ 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.214 ************************************ 00:08:14.214 START TEST nvmf_lvol 00:08:14.214 ************************************ 00:08:14.214 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:14.214 * Looking for test storage... 
00:08:14.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.474 --rc genhtml_branch_coverage=1 00:08:14.474 --rc genhtml_function_coverage=1 00:08:14.474 --rc genhtml_legend=1 00:08:14.474 --rc geninfo_all_blocks=1 00:08:14.474 --rc geninfo_unexecuted_blocks=1 00:08:14.474 00:08:14.474 ' 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.474 --rc genhtml_branch_coverage=1 00:08:14.474 --rc genhtml_function_coverage=1 00:08:14.474 --rc genhtml_legend=1 00:08:14.474 --rc geninfo_all_blocks=1 00:08:14.474 --rc geninfo_unexecuted_blocks=1 00:08:14.474 00:08:14.474 ' 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.474 --rc genhtml_branch_coverage=1 00:08:14.474 --rc genhtml_function_coverage=1 00:08:14.474 --rc genhtml_legend=1 00:08:14.474 --rc geninfo_all_blocks=1 00:08:14.474 --rc geninfo_unexecuted_blocks=1 00:08:14.474 00:08:14.474 ' 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.474 --rc genhtml_branch_coverage=1 00:08:14.474 --rc genhtml_function_coverage=1 00:08:14.474 --rc genhtml_legend=1 00:08:14.474 --rc geninfo_all_blocks=1 00:08:14.474 --rc geninfo_unexecuted_blocks=1 00:08:14.474 00:08:14.474 ' 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:14.474 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.474 15:21:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... the same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:14.474 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:08:14.475
15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
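These variables feed the kernel-initiator side of later tests; assembled into an actual connect they would look like this (a sketch built from the values set above, not a command this log runs at this point):

# nvme connect built from NVME_CONNECT, NVME_SUBNQN, NVME_HOST and the first target IP/port (sketch).
nvme connect -t tcp -a 10.0.0.3 -s 4420 \
	-n nqn.2016-06.io.spdk:testnqn \
	--hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf \
	--hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf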
00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:14.475 Cannot find device "nvmf_init_br" 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:14.475 Cannot find device "nvmf_init_br2" 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:14.475 Cannot find device "nvmf_tgt_br" 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.475 Cannot find device "nvmf_tgt_br2" 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:14.475 Cannot find device "nvmf_init_br" 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:14.475 Cannot find device "nvmf_init_br2" 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:14.475 Cannot find device "nvmf_tgt_br" 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:14.475 Cannot find device "nvmf_tgt_br2" 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:14.475 Cannot find device "nvmf_br" 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:14.475 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:14.734 Cannot find device "nvmf_init_if" 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:14.734 Cannot find device "nvmf_init_if2" 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.734 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.734 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
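Condensed, the trace above wires two veth pairs into the nvmf_tgt_ns_spdk namespace and two host-side pairs, then bridges all four peer ends together. One half of that wiring as a standalone sketch (names and addresses are the ones from the log; run as root):

# Host-side and target-side veth pairs, joined by the nvmf_br bridge (sketch, one pair of each).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root ns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end moves into the ns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                # tie the two peer ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br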
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:14.734 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:14.734 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:08:14.734 00:08:14.734 --- 10.0.0.3 ping statistics --- 00:08:14.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.734 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:14.734 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:14.734 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:08:14.734 00:08:14.734 --- 10.0.0.4 ping statistics --- 00:08:14.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.734 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:14.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:14.734 00:08:14.734 --- 10.0.0.1 ping statistics --- 00:08:14.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.734 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:14.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:14.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:08:14.734 00:08:14.734 --- 10.0.0.2 ping statistics --- 00:08:14.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.734 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:14.734 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:14.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=65715 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 65715 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 65715 ']' 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.993 15:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:14.993 [2024-10-01 15:21:13.983728] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
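The ACCEPT rules installed above (nvmf/common.sh@217-219) go through the harness's ipts wrapper, which tags every rule with an '-m comment --comment SPDK_NVMF:...' marker; the iptr teardown seen at the end of each suite then strips all tagged rules in one pass instead of tracking rule positions. A minimal sketch of the pair, with function bodies reconstructed from the expanded commands visible in this log rather than quoted from nvmf/common.sh:

    # Install a rule, tagging it with the exact arguments used
    # (the log shows iptables accepts the comment match after -j).
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    # Teardown: rewrite the ruleset without any SPDK_NVMF-tagged entries.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT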
00:08:14.993 [2024-10-01 15:21:13.984528] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.993 [2024-10-01 15:21:14.128304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.252 [2024-10-01 15:21:14.205903] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.252 [2024-10-01 15:21:14.206160] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.252 [2024-10-01 15:21:14.206317] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.252 [2024-10-01 15:21:14.206652] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.252 [2024-10-01 15:21:14.206782] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.252 [2024-10-01 15:21:14.207013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.252 [2024-10-01 15:21:14.207078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.252 [2024-10-01 15:21:14.207083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.252 15:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.252 15:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:15.252 15:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:15.252 15:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:15.252 15:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.252 15:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.252 15:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:15.511 [2024-10-01 15:21:14.670083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.770 15:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:16.028 15:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:16.028 15:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:16.286 15:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:16.286 15:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:16.544 15:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:17.111 15:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4b9b8cfe-1620-41ec-9602-bf25b9fec1db 00:08:17.111 15:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
4b9b8cfe-1620-41ec-9602-bf25b9fec1db lvol 20 00:08:17.369 15:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=690ded87-e915-49d0-b991-d75e35eee226 00:08:17.369 15:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:17.627 15:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 690ded87-e915-49d0-b991-d75e35eee226 00:08:17.886 15:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:18.144 [2024-10-01 15:21:17.215809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:18.144 15:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:18.403 15:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65849 00:08:18.403 15:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:18.403 15:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:19.776 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 690ded87-e915-49d0-b991-d75e35eee226 MY_SNAPSHOT 00:08:19.776 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9bc8edae-3d3b-4b57-9d7b-f97e8d4736e3 00:08:19.776 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 690ded87-e915-49d0-b991-d75e35eee226 30 00:08:20.038 15:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 9bc8edae-3d3b-4b57-9d7b-f97e8d4736e3 MY_CLONE 00:08:20.609 15:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3a16acdf-c888-4d83-9ce7-03a337e5eaa8 00:08:20.609 15:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 3a16acdf-c888-4d83-9ce7-03a337e5eaa8 00:08:21.176 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65849 00:08:29.289 Initializing NVMe Controllers 00:08:29.289 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:29.289 Controller IO queue size 128, less than required. 00:08:29.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:29.289 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:29.289 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:29.289 Initialization complete. Launching workers. 
00:08:29.289 ======================================================== 00:08:29.289 Latency(us) 00:08:29.289 Device Information : IOPS MiB/s Average min max 00:08:29.289 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9775.40 38.19 13096.95 1181.27 66904.70 00:08:29.289 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9900.90 38.68 12940.82 1654.52 63265.60 00:08:29.289 ======================================================== 00:08:29.289 Total : 19676.30 76.86 13018.39 1181.27 66904.70 00:08:29.289 00:08:29.289 15:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:29.289 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 690ded87-e915-49d0-b991-d75e35eee226 00:08:29.289 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b9b8cfe-1620-41ec-9602-bf25b9fec1db 00:08:29.548 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:29.548 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:29.548 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:29.548 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:29.548 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:29.548 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.548 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:29.548 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.548 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.548 rmmod nvme_tcp 00:08:29.548 rmmod nvme_fabrics 00:08:29.806 rmmod nvme_keyring 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 65715 ']' 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 65715 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 65715 ']' 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 65715 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65715 00:08:29.806 killing process with pid 65715 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 65715' 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 65715 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 65715 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:29.806 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:30.065 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:30.065 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:30.065 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:30.065 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.065 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:30.065 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:30.065 00:08:30.065 real 0m15.896s 00:08:30.065 user 1m6.004s 00:08:30.065 sys 0m3.857s 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:30.065 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:30.065 ************************************ 00:08:30.065 END TEST nvmf_lvol 00:08:30.065 ************************************ 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.327 ************************************ 00:08:30.327 START TEST nvmf_lvs_grow 00:08:30.327 ************************************ 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:30.327 * Looking for test storage... 00:08:30.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.327 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:30.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.328 --rc genhtml_branch_coverage=1 00:08:30.328 --rc genhtml_function_coverage=1 00:08:30.328 --rc genhtml_legend=1 00:08:30.328 --rc geninfo_all_blocks=1 00:08:30.328 --rc geninfo_unexecuted_blocks=1 00:08:30.328 00:08:30.328 ' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:30.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.328 --rc genhtml_branch_coverage=1 00:08:30.328 --rc genhtml_function_coverage=1 00:08:30.328 --rc genhtml_legend=1 00:08:30.328 --rc geninfo_all_blocks=1 00:08:30.328 --rc geninfo_unexecuted_blocks=1 00:08:30.328 00:08:30.328 ' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:30.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.328 --rc genhtml_branch_coverage=1 00:08:30.328 --rc genhtml_function_coverage=1 00:08:30.328 --rc genhtml_legend=1 00:08:30.328 --rc geninfo_all_blocks=1 00:08:30.328 --rc geninfo_unexecuted_blocks=1 00:08:30.328 00:08:30.328 ' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:30.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.328 --rc genhtml_branch_coverage=1 00:08:30.328 --rc genhtml_function_coverage=1 00:08:30.328 --rc genhtml_legend=1 00:08:30.328 --rc geninfo_all_blocks=1 00:08:30.328 --rc geninfo_unexecuted_blocks=1 00:08:30.328 00:08:30.328 ' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:30.328 15:21:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.328 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
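nvmf/common.sh has now been sourced a second time for the lvs_grow suite: it regenerates a host NQN/ID pair via nvme gen-hostnqn and defines two RPC endpoints, the target's default UNIX socket plus a private bdevperf socket, so both daemons can be driven from the same shell. A short sketch of the two-socket usage as it appears later in this run (paths, addresses, and NQNs as in the log):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target-side RPCs default to /var/tmp/spdk.sock
    # (serviced by the namespace-wrapped nvmf_tgt).
    $rpc_py nvmf_create_transport -t tcp -o -u 8192

    # Initiator-side bdevperf gets its own socket, selected with -s.
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0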
00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:30.328 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:30.329 Cannot find device "nvmf_init_br" 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:30.329 Cannot find device "nvmf_init_br2" 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:30.329 Cannot find device "nvmf_tgt_br" 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.329 Cannot find device "nvmf_tgt_br2" 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:30.329 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:30.587 Cannot find device "nvmf_init_br" 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:30.587 Cannot find device "nvmf_init_br2" 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:30.587 Cannot find device "nvmf_tgt_br" 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:30.587 Cannot find device "nvmf_tgt_br2" 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:30.587 Cannot find device "nvmf_br" 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:30.587 Cannot find device "nvmf_init_if" 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:30.587 Cannot find device "nvmf_init_if2" 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:30.587 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:30.588 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:30.588 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:30.588 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:30.588 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:30.588 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
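With nvmf/common.sh@177-214 replayed, the topology for this suite matches the one torn down after nvmf_lvol: two veth pairs on the initiator side (nvmf_init_if/if2 at 10.0.0.1-2/24 in the root namespace) and two on the target side (nvmf_tgt_if/if2 at 10.0.0.3-4/24, moved into nvmf_tgt_ns_spdk), with the peer halves enslaved to the nvmf_br bridge so both namespaces share one L2 segment. A condensed sketch showing one pair per side, using the same commands the log expands:

    ip netns add nvmf_tgt_ns_spdk

    # Each veth pair: the *_if end carries the address, its *_br peer joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br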
00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:30.846 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:30.846 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:08:30.846 00:08:30.846 --- 10.0.0.3 ping statistics --- 00:08:30.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.846 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:30.846 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:30.846 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:08:30.846 00:08:30.846 --- 10.0.0.4 ping statistics --- 00:08:30.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.846 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:30.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:30.846 00:08:30.846 --- 10.0.0.1 ping statistics --- 00:08:30.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.846 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:30.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:30.846 00:08:30.846 --- 10.0.0.2 ping statistics --- 00:08:30.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.846 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:30.846 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=66273 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 66273 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 66273 ']' 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.847 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.847 [2024-10-01 15:21:29.876281] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
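nvmf/common.sh@227 above prepends the namespace wrapper to NVMF_APP, so nvmfappstart launches nvmf_tgt inside nvmf_tgt_ns_spdk (the full command is visible at @504) and then blocks in waitforlisten until the RPC socket answers. A rough sketch of that launch-and-wait pattern, assuming a simple bounded poll; the real waitforlisten helper in autotest_common.sh carries more retries and diagnostics:

    NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1)
    "${NVMF_APP[@]}" &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the target answers (bounded retries).
    for ((i = 0; i < 100; i++)); do
        $rpc_py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done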
00:08:30.847 [2024-10-01 15:21:29.876369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.105 [2024-10-01 15:21:30.014856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.106 [2024-10-01 15:21:30.085561] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.106 [2024-10-01 15:21:30.085625] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.106 [2024-10-01 15:21:30.085640] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.106 [2024-10-01 15:21:30.085651] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.106 [2024-10-01 15:21:30.085660] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.106 [2024-10-01 15:21:30.085693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.106 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.106 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:31.106 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:31.106 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.106 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.106 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.106 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:31.364 [2024-10-01 15:21:30.477093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.364 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:31.364 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.364 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.364 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.364 ************************************ 00:08:31.364 START TEST lvs_grow_clean 00:08:31.364 ************************************ 00:08:31.365 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:31.365 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:31.365 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:31.365 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:31.365 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:31.365 15:21:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:31.365 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:31.365 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:31.365 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:31.365 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.931 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:31.931 15:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:32.189 15:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:32.189 15:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:32.189 15:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:32.446 15:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:32.446 15:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:32.446 15:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 lvol 150 00:08:32.704 15:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=10a14770-5fee-45d9-9235-b1216f665506 00:08:32.704 15:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:32.704 15:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:32.962 [2024-10-01 15:21:32.044509] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:32.962 [2024-10-01 15:21:32.044605] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:32.962 true 00:08:32.962 15:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:32.962 15:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:33.529 15:21:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:33.529 15:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.786 15:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 10a14770-5fee-45d9-9235-b1216f665506 00:08:34.044 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:34.302 [2024-10-01 15:21:33.337175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:34.302 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:34.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.560 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66427 00:08:34.560 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:34.560 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.560 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66427 /var/tmp/bdevperf.sock 00:08:34.560 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 66427 ']' 00:08:34.560 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.560 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.560 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.560 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.560 15:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:34.818 [2024-10-01 15:21:33.757138] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
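Before the perf workload starts, the lvs_grow_clean mechanics are in place: a 200 MiB file-backed AIO bdev carved into 4 MiB clusters leaves 49 data clusters (checked at @30), the backing file has already been truncated to 400 MiB and rescanned (@36-@37, block count 51200 -> 102400), and the count is confirmed still 49 until bdev_lvol_grow_lvstore runs at @60 below, after which @61 expects 99. Condensed from the RPCs in this run, with the lvstore UUID left as a placeholder:

    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio"
    $rpc_py bdev_aio_create "$aio" aio_bdev 4096
    $rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs      # -> 49 data clusters

    # Grow the backing file, let the AIO bdev pick up the new size,
    # then grow the lvstore on top of it.
    truncate -s 400M "$aio"
    $rpc_py bdev_aio_rescan aio_bdev
    $rpc_py bdev_lvol_grow_lvstore -u <lvstore-uuid>       # placeholder UUID
    $rpc_py bdev_lvol_get_lvstores -u <lvstore-uuid> \
        | jq -r '.[0].total_data_clusters'                 # expect 99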
00:08:34.818 [2024-10-01 15:21:33.757234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66427 ] 00:08:34.818 [2024-10-01 15:21:33.893933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.818 [2024-10-01 15:21:33.954894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.076 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.076 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:35.076 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:35.333 Nvme0n1 00:08:35.333 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:35.593 [ 00:08:35.593 { 00:08:35.593 "aliases": [ 00:08:35.593 "10a14770-5fee-45d9-9235-b1216f665506" 00:08:35.593 ], 00:08:35.593 "assigned_rate_limits": { 00:08:35.593 "r_mbytes_per_sec": 0, 00:08:35.593 "rw_ios_per_sec": 0, 00:08:35.593 "rw_mbytes_per_sec": 0, 00:08:35.593 "w_mbytes_per_sec": 0 00:08:35.593 }, 00:08:35.593 "block_size": 4096, 00:08:35.593 "claimed": false, 00:08:35.593 "driver_specific": { 00:08:35.593 "mp_policy": "active_passive", 00:08:35.593 "nvme": [ 00:08:35.593 { 00:08:35.593 "ctrlr_data": { 00:08:35.593 "ana_reporting": false, 00:08:35.593 "cntlid": 1, 00:08:35.593 "firmware_revision": "25.01", 00:08:35.593 "model_number": "SPDK bdev Controller", 00:08:35.593 "multi_ctrlr": true, 00:08:35.593 "oacs": { 00:08:35.593 "firmware": 0, 00:08:35.593 "format": 0, 00:08:35.593 "ns_manage": 0, 00:08:35.593 "security": 0 00:08:35.593 }, 00:08:35.593 "serial_number": "SPDK0", 00:08:35.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.593 "vendor_id": "0x8086" 00:08:35.593 }, 00:08:35.593 "ns_data": { 00:08:35.593 "can_share": true, 00:08:35.593 "id": 1 00:08:35.593 }, 00:08:35.593 "trid": { 00:08:35.593 "adrfam": "IPv4", 00:08:35.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.593 "traddr": "10.0.0.3", 00:08:35.593 "trsvcid": "4420", 00:08:35.593 "trtype": "TCP" 00:08:35.593 }, 00:08:35.593 "vs": { 00:08:35.593 "nvme_version": "1.3" 00:08:35.593 } 00:08:35.593 } 00:08:35.593 ] 00:08:35.593 }, 00:08:35.593 "memory_domains": [ 00:08:35.593 { 00:08:35.593 "dma_device_id": "system", 00:08:35.593 "dma_device_type": 1 00:08:35.593 } 00:08:35.593 ], 00:08:35.593 "name": "Nvme0n1", 00:08:35.593 "num_blocks": 38912, 00:08:35.593 "numa_id": -1, 00:08:35.593 "product_name": "NVMe disk", 00:08:35.593 "supported_io_types": { 00:08:35.593 "abort": true, 00:08:35.593 "compare": true, 00:08:35.593 "compare_and_write": true, 00:08:35.593 "copy": true, 00:08:35.593 "flush": true, 00:08:35.593 "get_zone_info": false, 00:08:35.593 "nvme_admin": true, 00:08:35.593 "nvme_io": true, 00:08:35.593 "nvme_io_md": false, 00:08:35.593 "nvme_iov_md": false, 00:08:35.593 "read": true, 00:08:35.593 "reset": true, 00:08:35.593 "seek_data": false, 00:08:35.593 "seek_hole": false, 00:08:35.593 "unmap": true, 00:08:35.593 
"write": true, 00:08:35.593 "write_zeroes": true, 00:08:35.593 "zcopy": false, 00:08:35.593 "zone_append": false, 00:08:35.593 "zone_management": false 00:08:35.593 }, 00:08:35.593 "uuid": "10a14770-5fee-45d9-9235-b1216f665506", 00:08:35.593 "zoned": false 00:08:35.593 } 00:08:35.593 ] 00:08:35.593 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:35.593 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66461 00:08:35.593 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:35.852 Running I/O for 10 seconds... 00:08:36.785 Latency(us) 00:08:36.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.785 Nvme0n1 : 1.00 6620.00 25.86 0.00 0.00 0.00 0.00 0.00 00:08:36.785 =================================================================================================================== 00:08:36.785 Total : 6620.00 25.86 0.00 0.00 0.00 0.00 0.00 00:08:36.785 00:08:37.743 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:37.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.743 Nvme0n1 : 2.00 7043.00 27.51 0.00 0.00 0.00 0.00 0.00 00:08:37.743 =================================================================================================================== 00:08:37.743 Total : 7043.00 27.51 0.00 0.00 0.00 0.00 0.00 00:08:37.743 00:08:38.000 true 00:08:38.000 15:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:38.000 15:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:38.565 15:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:38.565 15:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:38.565 15:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66461 00:08:38.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.822 Nvme0n1 : 3.00 7095.33 27.72 0.00 0.00 0.00 0.00 0.00 00:08:38.822 =================================================================================================================== 00:08:38.822 Total : 7095.33 27.72 0.00 0.00 0.00 0.00 0.00 00:08:38.822 00:08:39.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.755 Nvme0n1 : 4.00 7123.00 27.82 0.00 0.00 0.00 0.00 0.00 00:08:39.755 =================================================================================================================== 00:08:39.756 Total : 7123.00 27.82 0.00 0.00 0.00 0.00 0.00 00:08:39.756 00:08:40.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.694 Nvme0n1 : 5.00 7123.60 27.83 0.00 0.00 0.00 0.00 0.00 00:08:40.694 
=================================================================================================================== 00:08:40.694 Total : 7123.60 27.83 0.00 0.00 0.00 0.00 0.00 00:08:40.694 00:08:42.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.069 Nvme0n1 : 6.00 7111.50 27.78 0.00 0.00 0.00 0.00 0.00 00:08:42.069 =================================================================================================================== 00:08:42.069 Total : 7111.50 27.78 0.00 0.00 0.00 0.00 0.00 00:08:42.069 00:08:43.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.004 Nvme0n1 : 7.00 7019.00 27.42 0.00 0.00 0.00 0.00 0.00 00:08:43.004 =================================================================================================================== 00:08:43.004 Total : 7019.00 27.42 0.00 0.00 0.00 0.00 0.00 00:08:43.004 00:08:43.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.937 Nvme0n1 : 8.00 6996.88 27.33 0.00 0.00 0.00 0.00 0.00 00:08:43.937 =================================================================================================================== 00:08:43.937 Total : 6996.88 27.33 0.00 0.00 0.00 0.00 0.00 00:08:43.937 00:08:44.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.870 Nvme0n1 : 9.00 6965.33 27.21 0.00 0.00 0.00 0.00 0.00 00:08:44.870 =================================================================================================================== 00:08:44.870 Total : 6965.33 27.21 0.00 0.00 0.00 0.00 0.00 00:08:44.870 00:08:45.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.843 Nvme0n1 : 10.00 6942.30 27.12 0.00 0.00 0.00 0.00 0.00 00:08:45.843 =================================================================================================================== 00:08:45.843 Total : 6942.30 27.12 0.00 0.00 0.00 0.00 0.00 00:08:45.843 00:08:45.843 00:08:45.843 Latency(us) 00:08:45.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.843 Nvme0n1 : 10.01 6948.75 27.14 0.00 0.00 18414.42 3634.27 77689.95 00:08:45.843 =================================================================================================================== 00:08:45.843 Total : 6948.75 27.14 0.00 0.00 18414.42 3634.27 77689.95 00:08:45.843 { 00:08:45.843 "results": [ 00:08:45.843 { 00:08:45.843 "job": "Nvme0n1", 00:08:45.843 "core_mask": "0x2", 00:08:45.843 "workload": "randwrite", 00:08:45.843 "status": "finished", 00:08:45.843 "queue_depth": 128, 00:08:45.843 "io_size": 4096, 00:08:45.843 "runtime": 10.009143, 00:08:45.843 "iops": 6948.746760836567, 00:08:45.843 "mibps": 27.14354203451784, 00:08:45.843 "io_failed": 0, 00:08:45.843 "io_timeout": 0, 00:08:45.843 "avg_latency_us": 18414.422654611855, 00:08:45.843 "min_latency_us": 3634.269090909091, 00:08:45.843 "max_latency_us": 77689.9490909091 00:08:45.843 } 00:08:45.843 ], 00:08:45.843 "core_count": 1 00:08:45.843 } 00:08:45.843 15:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66427 00:08:45.843 15:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 66427 ']' 00:08:45.843 15:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 66427 00:08:45.843 15:21:44 
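
The ten-second job settles just under 7k IOPS and the EXIT trap fires killprocess on the bdevperf pid, after which the target-side teardown and the final accounting check run. The arithmetic behind the upcoming free_clusters=61: the 150 MiB lvol occupies ceil(150 / 4) = 38 clusters (matching "num_allocated_clusters": 38 in the bdev dump earlier), so of the 99 data clusters 99 - 38 = 61 remain free. A condensed replay of that teardown-and-check, with $lvs_uuid as before:

scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" \
    | jq -r '.[0].free_clusters'   # 99 total - 38 allocated = 61
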
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:45.844 15:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.844 15:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66427 00:08:45.844 killing process with pid 66427 00:08:45.844 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.844 00:08:45.844 Latency(us) 00:08:45.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.844 =================================================================================================================== 00:08:45.844 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.844 15:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:45.844 15:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:45.844 15:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66427' 00:08:45.844 15:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 66427 00:08:45.844 15:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 66427 00:08:46.107 15:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:46.365 15:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.623 15:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:46.623 15:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:46.882 15:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:46.882 15:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:46.882 15:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.141 [2024-10-01 15:21:46.179277] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:47.141 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:47.400 2024/10/01 15:21:46 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c3f26f5b-33fd-4718-9e12-58feb37d9d69], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:47.400 request: 00:08:47.400 { 00:08:47.400 "method": "bdev_lvol_get_lvstores", 00:08:47.400 "params": { 00:08:47.400 "uuid": "c3f26f5b-33fd-4718-9e12-58feb37d9d69" 00:08:47.400 } 00:08:47.400 } 00:08:47.400 Got JSON-RPC error response 00:08:47.400 GoRPCClient: error on JSON-RPC call 00:08:47.659 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:47.659 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:47.659 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:47.659 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:47.659 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.917 aio_bdev 00:08:47.917 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 10a14770-5fee-45d9-9235-b1216f665506 00:08:47.917 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=10a14770-5fee-45d9-9235-b1216f665506 00:08:47.917 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:47.917 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:47.917 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:47.917 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
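
Deleting aio_bdev hot-removed the lvstore (the vbdev_lvol notice above), so the next bdev_lvol_get_lvstores call is required to fail; the NOT wrapper's type/exec plumbing traced here asserts exactly that, and the RPC duly returns Code=-19 (No such device). Re-creating the aio bdev over the same file then lets examine rediscover the lvstore and its lvol from on-disk metadata, which is why the following bdev dump shows the same UUID with 38 allocated clusters. A minimal sketch of the idiom; the not() shim below is a simplification standing in for the harness' NOT helper, not its real implementation:

# Assert a command fails, then bring the base bdev back and wait for examine.
not() { "$@" && return 1 || return 0; }
not scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid"   # expects ENODEV (-19)
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
scripts/rpc.py bdev_wait_for_examine   # lvstore + lvol reappear from metadata
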
00:08:47.917 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:48.176 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 10a14770-5fee-45d9-9235-b1216f665506 -t 2000 00:08:48.434 [ 00:08:48.434 { 00:08:48.434 "aliases": [ 00:08:48.434 "lvs/lvol" 00:08:48.434 ], 00:08:48.434 "assigned_rate_limits": { 00:08:48.434 "r_mbytes_per_sec": 0, 00:08:48.434 "rw_ios_per_sec": 0, 00:08:48.434 "rw_mbytes_per_sec": 0, 00:08:48.434 "w_mbytes_per_sec": 0 00:08:48.434 }, 00:08:48.434 "block_size": 4096, 00:08:48.434 "claimed": false, 00:08:48.434 "driver_specific": { 00:08:48.434 "lvol": { 00:08:48.434 "base_bdev": "aio_bdev", 00:08:48.434 "clone": false, 00:08:48.434 "esnap_clone": false, 00:08:48.434 "lvol_store_uuid": "c3f26f5b-33fd-4718-9e12-58feb37d9d69", 00:08:48.434 "num_allocated_clusters": 38, 00:08:48.434 "snapshot": false, 00:08:48.434 "thin_provision": false 00:08:48.434 } 00:08:48.434 }, 00:08:48.434 "name": "10a14770-5fee-45d9-9235-b1216f665506", 00:08:48.434 "num_blocks": 38912, 00:08:48.434 "product_name": "Logical Volume", 00:08:48.434 "supported_io_types": { 00:08:48.434 "abort": false, 00:08:48.434 "compare": false, 00:08:48.434 "compare_and_write": false, 00:08:48.434 "copy": false, 00:08:48.434 "flush": false, 00:08:48.434 "get_zone_info": false, 00:08:48.434 "nvme_admin": false, 00:08:48.434 "nvme_io": false, 00:08:48.434 "nvme_io_md": false, 00:08:48.434 "nvme_iov_md": false, 00:08:48.434 "read": true, 00:08:48.434 "reset": true, 00:08:48.434 "seek_data": true, 00:08:48.434 "seek_hole": true, 00:08:48.434 "unmap": true, 00:08:48.434 "write": true, 00:08:48.434 "write_zeroes": true, 00:08:48.434 "zcopy": false, 00:08:48.434 "zone_append": false, 00:08:48.434 "zone_management": false 00:08:48.434 }, 00:08:48.434 "uuid": "10a14770-5fee-45d9-9235-b1216f665506", 00:08:48.434 "zoned": false 00:08:48.434 } 00:08:48.434 ] 00:08:48.434 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:48.434 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:48.434 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:49.001 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:49.001 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:49.001 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:49.270 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:49.270 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 10a14770-5fee-45d9-9235-b1216f665506 00:08:49.568 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3f26f5b-33fd-4718-9e12-58feb37d9d69 00:08:50.133 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:50.391 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.650 ************************************ 00:08:50.650 END TEST lvs_grow_clean 00:08:50.650 ************************************ 00:08:50.650 00:08:50.650 real 0m19.237s 00:08:50.650 user 0m18.634s 00:08:50.650 sys 0m2.190s 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.650 ************************************ 00:08:50.650 START TEST lvs_grow_dirty 00:08:50.650 ************************************ 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.650 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.218 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:51.218 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:51.477 15:21:50 
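
lvs_grow_clean completes in about 19 seconds and run_test moves straight into lvs_grow_dirty. The dirty variant repeats the same grow flow, but instead of unwinding cleanly it will later kill -9 the nvmf target while the lvstore is still open, restart it, and re-attach the AIO file; the blobstore must then recover from metadata that was never cleanly closed (the "Performing recovery on blobstore" notices further down), and the free/total cluster assertions are repeated against the recovered store. In outline, with the netns wrapper from this log omitted for brevity and $nvmfpid as an illustrative placeholder:

# Dirty path: hard-kill the target, restart it, re-create the AIO bdev.
kill -9 "$nvmfpid"                              # no clean lvstore shutdown
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &      # fresh target instance
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
# -> blobstore recovery replays the lvstore; cluster checks then repeat
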
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dce12519-25c6-400c-ac22-c60e17393fd0 00:08:51.477 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:51.477 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:08:51.735 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:51.735 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:51.735 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dce12519-25c6-400c-ac22-c60e17393fd0 lvol 150 00:08:51.992 15:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=57c6035c-dffa-4559-9e8b-8fa2d843ea87 00:08:51.992 15:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:51.992 15:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:52.561 [2024-10-01 15:21:51.466475] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:52.561 [2024-10-01 15:21:51.466567] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:52.561 true 00:08:52.561 15:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:08:52.561 15:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:52.819 15:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:52.819 15:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:53.078 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 57c6035c-dffa-4559-9e8b-8fa2d843ea87 00:08:53.336 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:53.596 [2024-10-01 15:21:52.635056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:53.596 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:53.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
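
By this point the dirty run has doubled the backing file and rescanned it (the bdev_aio notice above: 51200 -> 102400 four-KiB blocks), yet total_data_clusters still reads 49: resizing the bdev only makes room, and the lvstore claims it only when bdev_lvol_grow_lvstore is invoked, which this test deliberately does while I/O is running. The two-step resize in isolation, commands as traced in this log:

# Growing is explicit and two-phased; the lvstore never grows on its own.
truncate -s 400M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_rescan aio_bdev          # bdev: 51200 -> 102400 blocks
scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" \
    | jq -r '.[0].total_data_clusters'           # still 49 at this point
scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"   # now reports 99
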
00:08:53.858 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:53.858 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66876 00:08:53.858 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.858 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66876 /var/tmp/bdevperf.sock 00:08:53.858 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 66876 ']' 00:08:53.858 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:53.858 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.858 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:53.858 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.858 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.858 [2024-10-01 15:21:52.964844] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:08:53.858 [2024-10-01 15:21:52.964943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66876 ] 00:08:54.117 [2024-10-01 15:21:53.098635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.117 [2024-10-01 15:21:53.183019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.117 15:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.117 15:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:54.117 15:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:54.690 Nvme0n1 00:08:54.690 15:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:54.949 [ 00:08:54.949 { 00:08:54.949 "aliases": [ 00:08:54.949 "57c6035c-dffa-4559-9e8b-8fa2d843ea87" 00:08:54.949 ], 00:08:54.949 "assigned_rate_limits": { 00:08:54.949 "r_mbytes_per_sec": 0, 00:08:54.949 "rw_ios_per_sec": 0, 00:08:54.949 "rw_mbytes_per_sec": 0, 00:08:54.949 "w_mbytes_per_sec": 0 00:08:54.949 }, 00:08:54.949 "block_size": 4096, 00:08:54.949 "claimed": false, 00:08:54.949 "driver_specific": { 00:08:54.949 "mp_policy": "active_passive", 00:08:54.949 "nvme": [ 00:08:54.949 { 00:08:54.949 "ctrlr_data": { 
00:08:54.949 "ana_reporting": false, 00:08:54.949 "cntlid": 1, 00:08:54.949 "firmware_revision": "25.01", 00:08:54.949 "model_number": "SPDK bdev Controller", 00:08:54.949 "multi_ctrlr": true, 00:08:54.949 "oacs": { 00:08:54.949 "firmware": 0, 00:08:54.949 "format": 0, 00:08:54.949 "ns_manage": 0, 00:08:54.949 "security": 0 00:08:54.949 }, 00:08:54.949 "serial_number": "SPDK0", 00:08:54.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.949 "vendor_id": "0x8086" 00:08:54.949 }, 00:08:54.949 "ns_data": { 00:08:54.949 "can_share": true, 00:08:54.949 "id": 1 00:08:54.949 }, 00:08:54.949 "trid": { 00:08:54.949 "adrfam": "IPv4", 00:08:54.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.949 "traddr": "10.0.0.3", 00:08:54.949 "trsvcid": "4420", 00:08:54.949 "trtype": "TCP" 00:08:54.949 }, 00:08:54.949 "vs": { 00:08:54.949 "nvme_version": "1.3" 00:08:54.949 } 00:08:54.949 } 00:08:54.949 ] 00:08:54.949 }, 00:08:54.949 "memory_domains": [ 00:08:54.949 { 00:08:54.949 "dma_device_id": "system", 00:08:54.949 "dma_device_type": 1 00:08:54.949 } 00:08:54.949 ], 00:08:54.949 "name": "Nvme0n1", 00:08:54.949 "num_blocks": 38912, 00:08:54.949 "numa_id": -1, 00:08:54.949 "product_name": "NVMe disk", 00:08:54.949 "supported_io_types": { 00:08:54.949 "abort": true, 00:08:54.949 "compare": true, 00:08:54.949 "compare_and_write": true, 00:08:54.949 "copy": true, 00:08:54.949 "flush": true, 00:08:54.949 "get_zone_info": false, 00:08:54.949 "nvme_admin": true, 00:08:54.949 "nvme_io": true, 00:08:54.949 "nvme_io_md": false, 00:08:54.949 "nvme_iov_md": false, 00:08:54.949 "read": true, 00:08:54.949 "reset": true, 00:08:54.949 "seek_data": false, 00:08:54.949 "seek_hole": false, 00:08:54.949 "unmap": true, 00:08:54.949 "write": true, 00:08:54.949 "write_zeroes": true, 00:08:54.949 "zcopy": false, 00:08:54.949 "zone_append": false, 00:08:54.949 "zone_management": false 00:08:54.949 }, 00:08:54.949 "uuid": "57c6035c-dffa-4559-9e8b-8fa2d843ea87", 00:08:54.949 "zoned": false 00:08:54.949 } 00:08:54.949 ] 00:08:54.949 15:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66910 00:08:54.949 15:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:54.949 15:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:54.949 Running I/O for 10 seconds... 
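
bdevperf.py perform_tests now drives the ten-second randwrite job against the exported namespace, and roughly two seconds in (sh@60 below) the harness grows the lvstore underneath the live workload; the pass criterion is simply that I/O keeps completing and total_data_clusters reads 99 afterwards. The concurrency pattern, condensed (paths shortened to the repo root):

# Grow the lvstore while the bdevperf job is in flight, as the harness does.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
sleep 2
scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"
wait "$run_test_pid"   # returns once the 10 s job finishes
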
00:08:56.327 Latency(us) 00:08:56.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.327 Nvme0n1 : 1.00 7753.00 30.29 0.00 0.00 0.00 0.00 0.00 00:08:56.327 =================================================================================================================== 00:08:56.327 Total : 7753.00 30.29 0.00 0.00 0.00 0.00 0.00 00:08:56.327 00:08:56.894 15:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dce12519-25c6-400c-ac22-c60e17393fd0 00:08:57.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.152 Nvme0n1 : 2.00 7700.50 30.08 0.00 0.00 0.00 0.00 0.00 00:08:57.152 =================================================================================================================== 00:08:57.152 Total : 7700.50 30.08 0.00 0.00 0.00 0.00 0.00 00:08:57.152 00:08:57.460 true 00:08:57.460 15:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:08:57.460 15:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:57.718 15:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:57.718 15:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:57.718 15:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66910 00:08:57.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.976 Nvme0n1 : 3.00 7194.67 28.10 0.00 0.00 0.00 0.00 0.00 00:08:57.976 =================================================================================================================== 00:08:57.976 Total : 7194.67 28.10 0.00 0.00 0.00 0.00 0.00 00:08:57.976 00:08:59.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.354 Nvme0n1 : 4.00 7177.00 28.04 0.00 0.00 0.00 0.00 0.00 00:08:59.354 =================================================================================================================== 00:08:59.354 Total : 7177.00 28.04 0.00 0.00 0.00 0.00 0.00 00:08:59.354 00:08:59.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.921 Nvme0n1 : 5.00 7288.40 28.47 0.00 0.00 0.00 0.00 0.00 00:08:59.921 =================================================================================================================== 00:08:59.922 Total : 7288.40 28.47 0.00 0.00 0.00 0.00 0.00 00:08:59.922 00:09:01.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.298 Nvme0n1 : 6.00 7330.00 28.63 0.00 0.00 0.00 0.00 0.00 00:09:01.298 =================================================================================================================== 00:09:01.298 Total : 7330.00 28.63 0.00 0.00 0.00 0.00 0.00 00:09:01.298 00:09:02.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.235 Nvme0n1 : 7.00 7193.29 28.10 0.00 0.00 0.00 0.00 0.00 00:09:02.235 =================================================================================================================== 00:09:02.235 
Total : 7193.29 28.10 0.00 0.00 0.00 0.00 0.00 00:09:02.235 00:09:03.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.170 Nvme0n1 : 8.00 7202.00 28.13 0.00 0.00 0.00 0.00 0.00 00:09:03.170 =================================================================================================================== 00:09:03.170 Total : 7202.00 28.13 0.00 0.00 0.00 0.00 0.00 00:09:03.170 00:09:04.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.156 Nvme0n1 : 9.00 7213.56 28.18 0.00 0.00 0.00 0.00 0.00 00:09:04.156 =================================================================================================================== 00:09:04.156 Total : 7213.56 28.18 0.00 0.00 0.00 0.00 0.00 00:09:04.156 00:09:05.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.090 Nvme0n1 : 10.00 7145.70 27.91 0.00 0.00 0.00 0.00 0.00 00:09:05.090 =================================================================================================================== 00:09:05.090 Total : 7145.70 27.91 0.00 0.00 0.00 0.00 0.00 00:09:05.090 00:09:05.090 00:09:05.090 Latency(us) 00:09:05.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.090 Nvme0n1 : 10.00 7155.34 27.95 0.00 0.00 17883.21 7923.90 170631.91 00:09:05.090 =================================================================================================================== 00:09:05.090 Total : 7155.34 27.95 0.00 0.00 17883.21 7923.90 170631.91 00:09:05.090 { 00:09:05.090 "results": [ 00:09:05.090 { 00:09:05.090 "job": "Nvme0n1", 00:09:05.090 "core_mask": "0x2", 00:09:05.090 "workload": "randwrite", 00:09:05.090 "status": "finished", 00:09:05.090 "queue_depth": 128, 00:09:05.090 "io_size": 4096, 00:09:05.090 "runtime": 10.004419, 00:09:05.090 "iops": 7155.338056113003, 00:09:05.090 "mibps": 27.95053928169142, 00:09:05.090 "io_failed": 0, 00:09:05.090 "io_timeout": 0, 00:09:05.090 "avg_latency_us": 17883.21293313099, 00:09:05.090 "min_latency_us": 7923.898181818182, 00:09:05.090 "max_latency_us": 170631.91272727272 00:09:05.090 } 00:09:05.090 ], 00:09:05.090 "core_count": 1 00:09:05.090 } 00:09:05.090 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66876 00:09:05.090 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 66876 ']' 00:09:05.090 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 66876 00:09:05.090 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:05.090 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.090 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66876 00:09:05.090 killing process with pid 66876 00:09:05.090 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.090 00:09:05.090 Latency(us) 00:09:05.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.090 =================================================================================================================== 00:09:05.090 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.090 15:22:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:05.090 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:05.090 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66876' 00:09:05.090 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 66876 00:09:05.090 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 66876 00:09:05.349 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:05.606 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:05.865 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:09:05.865 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66273 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66273 00:09:06.437 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66273 Killed "${NVMF_APP[@]}" "$@" 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=67078 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 67078 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 67078 ']' 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 
-- # local max_retries=100 00:09:06.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.437 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:06.437 [2024-10-01 15:22:05.428901] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:09:06.437 [2024-10-01 15:22:05.428996] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.437 [2024-10-01 15:22:05.569041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.695 [2024-10-01 15:22:05.639514] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.695 [2024-10-01 15:22:05.639572] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.695 [2024-10-01 15:22:05.639585] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.695 [2024-10-01 15:22:05.639595] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.695 [2024-10-01 15:22:05.639603] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.695 [2024-10-01 15:22:05.639637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.695 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.695 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:06.695 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:06.695 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:06.695 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:06.695 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.695 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.954 [2024-10-01 15:22:06.063493] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:06.954 [2024-10-01 15:22:06.063819] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:06.954 [2024-10-01 15:22:06.063953] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:06.954 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:06.954 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 
57c6035c-dffa-4559-9e8b-8fa2d843ea87 00:09:06.954 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=57c6035c-dffa-4559-9e8b-8fa2d843ea87 00:09:06.954 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.954 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:06.954 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.954 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.954 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:07.521 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57c6035c-dffa-4559-9e8b-8fa2d843ea87 -t 2000 00:09:07.779 [ 00:09:07.779 { 00:09:07.779 "aliases": [ 00:09:07.779 "lvs/lvol" 00:09:07.779 ], 00:09:07.779 "assigned_rate_limits": { 00:09:07.779 "r_mbytes_per_sec": 0, 00:09:07.779 "rw_ios_per_sec": 0, 00:09:07.779 "rw_mbytes_per_sec": 0, 00:09:07.779 "w_mbytes_per_sec": 0 00:09:07.779 }, 00:09:07.779 "block_size": 4096, 00:09:07.779 "claimed": false, 00:09:07.779 "driver_specific": { 00:09:07.779 "lvol": { 00:09:07.779 "base_bdev": "aio_bdev", 00:09:07.779 "clone": false, 00:09:07.779 "esnap_clone": false, 00:09:07.779 "lvol_store_uuid": "dce12519-25c6-400c-ac22-c60e17393fd0", 00:09:07.779 "num_allocated_clusters": 38, 00:09:07.779 "snapshot": false, 00:09:07.779 "thin_provision": false 00:09:07.779 } 00:09:07.779 }, 00:09:07.779 "name": "57c6035c-dffa-4559-9e8b-8fa2d843ea87", 00:09:07.779 "num_blocks": 38912, 00:09:07.779 "product_name": "Logical Volume", 00:09:07.779 "supported_io_types": { 00:09:07.779 "abort": false, 00:09:07.779 "compare": false, 00:09:07.779 "compare_and_write": false, 00:09:07.779 "copy": false, 00:09:07.779 "flush": false, 00:09:07.779 "get_zone_info": false, 00:09:07.779 "nvme_admin": false, 00:09:07.779 "nvme_io": false, 00:09:07.779 "nvme_io_md": false, 00:09:07.779 "nvme_iov_md": false, 00:09:07.779 "read": true, 00:09:07.779 "reset": true, 00:09:07.779 "seek_data": true, 00:09:07.779 "seek_hole": true, 00:09:07.779 "unmap": true, 00:09:07.779 "write": true, 00:09:07.779 "write_zeroes": true, 00:09:07.779 "zcopy": false, 00:09:07.779 "zone_append": false, 00:09:07.779 "zone_management": false 00:09:07.779 }, 00:09:07.779 "uuid": "57c6035c-dffa-4559-9e8b-8fa2d843ea87", 00:09:07.779 "zoned": false 00:09:07.779 } 00:09:07.779 ] 00:09:07.779 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:07.779 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:09:07.779 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:08.038 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:08.038 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:09:08.038 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:08.296 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:08.297 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.863 [2024-10-01 15:22:07.749394] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:08.863 15:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:09:09.122 2024/10/01 15:22:08 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:dce12519-25c6-400c-ac22-c60e17393fd0], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:09.122 request: 00:09:09.122 { 00:09:09.122 "method": "bdev_lvol_get_lvstores", 00:09:09.122 "params": { 00:09:09.122 "uuid": "dce12519-25c6-400c-ac22-c60e17393fd0" 00:09:09.122 } 00:09:09.122 } 00:09:09.122 Got JSON-RPC error response 00:09:09.122 GoRPCClient: error on JSON-RPC call 00:09:09.122 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:09.122 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:09:09.122 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:09.122 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:09.122 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.380 aio_bdev 00:09:09.380 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 57c6035c-dffa-4559-9e8b-8fa2d843ea87 00:09:09.380 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=57c6035c-dffa-4559-9e8b-8fa2d843ea87 00:09:09.380 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.380 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:09.380 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.380 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.380 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:09.638 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57c6035c-dffa-4559-9e8b-8fa2d843ea87 -t 2000 00:09:09.897 [ 00:09:09.897 { 00:09:09.897 "aliases": [ 00:09:09.897 "lvs/lvol" 00:09:09.897 ], 00:09:09.897 "assigned_rate_limits": { 00:09:09.897 "r_mbytes_per_sec": 0, 00:09:09.897 "rw_ios_per_sec": 0, 00:09:09.897 "rw_mbytes_per_sec": 0, 00:09:09.897 "w_mbytes_per_sec": 0 00:09:09.897 }, 00:09:09.897 "block_size": 4096, 00:09:09.897 "claimed": false, 00:09:09.897 "driver_specific": { 00:09:09.897 "lvol": { 00:09:09.897 "base_bdev": "aio_bdev", 00:09:09.897 "clone": false, 00:09:09.897 "esnap_clone": false, 00:09:09.897 "lvol_store_uuid": "dce12519-25c6-400c-ac22-c60e17393fd0", 00:09:09.897 "num_allocated_clusters": 38, 00:09:09.897 "snapshot": false, 00:09:09.897 "thin_provision": false 00:09:09.897 } 00:09:09.897 }, 00:09:09.897 "name": "57c6035c-dffa-4559-9e8b-8fa2d843ea87", 00:09:09.897 "num_blocks": 38912, 00:09:09.897 "product_name": "Logical Volume", 00:09:09.897 "supported_io_types": { 00:09:09.897 "abort": false, 00:09:09.897 "compare": false, 00:09:09.897 "compare_and_write": false, 00:09:09.897 "copy": false, 00:09:09.897 "flush": false, 00:09:09.897 "get_zone_info": false, 00:09:09.897 "nvme_admin": false, 00:09:09.897 "nvme_io": false, 00:09:09.897 "nvme_io_md": false, 00:09:09.897 "nvme_iov_md": false, 00:09:09.897 "read": true, 00:09:09.897 "reset": true, 00:09:09.897 "seek_data": true, 00:09:09.897 "seek_hole": true, 00:09:09.897 "unmap": true, 00:09:09.897 "write": true, 00:09:09.897 "write_zeroes": true, 00:09:09.897 "zcopy": false, 00:09:09.897 "zone_append": false, 00:09:09.897 "zone_management": false 00:09:09.897 }, 00:09:09.897 "uuid": "57c6035c-dffa-4559-9e8b-8fa2d843ea87", 00:09:09.897 "zoned": false 00:09:09.897 } 00:09:09.897 ] 00:09:09.897 15:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@907 -- # return 0 00:09:09.897 15:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:09:09.897 15:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:10.464 15:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:10.464 15:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:10.464 15:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dce12519-25c6-400c-ac22-c60e17393fd0 00:09:10.722 15:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:10.722 15:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 57c6035c-dffa-4559-9e8b-8fa2d843ea87 00:09:10.979 15:22:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dce12519-25c6-400c-ac22-c60e17393fd0 00:09:11.259 15:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:11.517 15:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:12.082 00:09:12.082 real 0m21.172s 00:09:12.082 user 0m44.339s 00:09:12.082 sys 0m7.726s 00:09:12.082 15:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.082 15:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.082 ************************************ 00:09:12.082 END TEST lvs_grow_dirty 00:09:12.082 ************************************ 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:12.082 nvmf_trace.0 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@823 -- # return 0 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:12.082 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:12.340 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.340 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:12.340 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.340 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.340 rmmod nvme_tcp 00:09:12.340 rmmod nvme_fabrics 00:09:12.341 rmmod nvme_keyring 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 67078 ']' 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 67078 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 67078 ']' 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 67078 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67078 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.341 killing process with pid 67078 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67078' 00:09:12.341 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 67078 00:09:12.599 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 67078 00:09:12.599 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.600 15:22:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:12.600 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:12.858 00:09:12.858 real 0m42.649s 00:09:12.858 user 1m9.811s 00:09:12.858 sys 0m10.833s 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.858 ************************************ 00:09:12.858 END TEST nvmf_lvs_grow 00:09:12.858 ************************************ 00:09:12.858 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.859 15:22:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:12.859 15:22:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:12.859 15:22:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.859 15:22:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.859 ************************************ 00:09:12.859 START TEST nvmf_bdev_io_wait 00:09:12.859 ************************************ 00:09:12.859 15:22:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
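The lvs_grow_dirty sequence that just finished re-created the AIO backing bdev, waited for the logical volume on it to be re-examined, confirmed that the grown lvstore survived the dirty shutdown (61 free of 99 total clusters), and tore everything down. Condensed into the bare rpc.py calls visible in the trace (the UUIDs are the ones from this run; the NOT negative-test wrapper and the waitforbdev retry loop are omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvs=dce12519-25c6-400c-ac22-c60e17393fd0
  lvol=57c6035c-dffa-4559-9e8b-8fa2d843ea87
  # re-attach the backing file; the lvstore on it is examined automatically
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b "$lvol" -t 2000            # the lvol bdev is back
  free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))                   # the assertions from the trace
  # teardown, mirroring nvmf_lvs_grow.sh@92-95
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"
  $rpc bdev_aio_delete aio_bdev
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev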
00:09:12.859 * Looking for test storage... 00:09:12.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.859 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:12.859 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:12.859 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:13.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.117 --rc genhtml_branch_coverage=1 00:09:13.117 --rc genhtml_function_coverage=1 00:09:13.117 --rc genhtml_legend=1 00:09:13.117 --rc geninfo_all_blocks=1 00:09:13.117 --rc geninfo_unexecuted_blocks=1 00:09:13.117 00:09:13.117 ' 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:13.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.117 --rc genhtml_branch_coverage=1 00:09:13.117 --rc genhtml_function_coverage=1 00:09:13.117 --rc genhtml_legend=1 00:09:13.117 --rc geninfo_all_blocks=1 00:09:13.117 --rc geninfo_unexecuted_blocks=1 00:09:13.117 00:09:13.117 ' 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:13.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.117 --rc genhtml_branch_coverage=1 00:09:13.117 --rc genhtml_function_coverage=1 00:09:13.117 --rc genhtml_legend=1 00:09:13.117 --rc geninfo_all_blocks=1 00:09:13.117 --rc geninfo_unexecuted_blocks=1 00:09:13.117 00:09:13.117 ' 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:13.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.117 --rc genhtml_branch_coverage=1 00:09:13.117 --rc genhtml_function_coverage=1 00:09:13.117 --rc genhtml_legend=1 00:09:13.117 --rc geninfo_all_blocks=1 00:09:13.117 --rc geninfo_unexecuted_blocks=1 00:09:13.117 00:09:13.117 ' 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.117 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.118 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
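The lcov block traced above is the coverage-tooling version probe: common.sh reads the installed lcov version and compares it against 2 using the lt/cmp_versions helpers from scripts/common.sh, so that pre-2.x lcov gets the legacy --rc option spellings. A minimal restatement of the comparison as traced (the real cmp_versions also handles the >, <= and >= operators):

  lt() {    # true when version $1 sorts before version $2
      local IFS=.-: i
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # first differing component decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal is not less-than
  }

  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov: keep --rc lcov_branch_coverage=1"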
00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:13.118 
15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:13.118 Cannot find device "nvmf_init_br" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:13.118 Cannot find device "nvmf_init_br2" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:13.118 Cannot find device "nvmf_tgt_br" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.118 Cannot find device "nvmf_tgt_br2" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:13.118 Cannot find device "nvmf_init_br" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:13.118 Cannot find device "nvmf_init_br2" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:13.118 Cannot find device "nvmf_tgt_br" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:13.118 Cannot find device "nvmf_tgt_br2" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:13.118 Cannot find device "nvmf_br" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:13.118 Cannot find device "nvmf_init_if" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:13.118 Cannot find device "nvmf_init_if2" 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:13.118 
15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:13.118 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:13.377 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:13.377 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:09:13.377 00:09:13.377 --- 10.0.0.3 ping statistics --- 00:09:13.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.377 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:13.377 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:13.377 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:09:13.377 00:09:13.377 --- 10.0.0.4 ping statistics --- 00:09:13.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.377 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:13.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:13.377 00:09:13.377 --- 10.0.0.1 ping statistics --- 00:09:13.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.377 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:13.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:13.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:09:13.377 00:09:13.377 --- 10.0.0.2 ping statistics --- 00:09:13.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.377 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:13.377 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=67557 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 67557 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 67557 ']' 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.635 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.635 [2024-10-01 15:22:12.626890] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
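The stretch from the "Cannot find device" probes down to the four pings is nvmf_veth_init building the virtual test network: a nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs, a nvmf_br bridge joining them to the initiator ends, iptables ACCEPT rules for port 4420, and connectivity checks in both directions. The initial probes fail by design, confirming nothing is left over from a previous run. Reduced to one of the two veth pairs (the nvmf_init_if2/nvmf_tgt_if2 pair and the remaining link-up steps are analogous):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                   # initiator -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator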
00:09:13.635 [2024-10-01 15:22:12.626993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.635 [2024-10-01 15:22:12.769953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.894 [2024-10-01 15:22:12.840042] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.894 [2024-10-01 15:22:12.840106] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.894 [2024-10-01 15:22:12.840121] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.894 [2024-10-01 15:22:12.840131] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.894 [2024-10-01 15:22:12.840140] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.894 [2024-10-01 15:22:12.840265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.894 [2024-10-01 15:22:12.840723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.894 [2024-10-01 15:22:12.841322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.894 [2024-10-01 15:22:12.841360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.894 15:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.894 [2024-10-01 15:22:13.013758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.894 Malloc0 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.894 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.153 [2024-10-01 15:22:13.071086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67597 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67599 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 
0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:14.153 { 00:09:14.153 "params": { 00:09:14.153 "name": "Nvme$subsystem", 00:09:14.153 "trtype": "$TEST_TRANSPORT", 00:09:14.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.153 "adrfam": "ipv4", 00:09:14.153 "trsvcid": "$NVMF_PORT", 00:09:14.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.153 "hdgst": ${hdgst:-false}, 00:09:14.153 "ddgst": ${ddgst:-false} 00:09:14.153 }, 00:09:14.153 "method": "bdev_nvme_attach_controller" 00:09:14.153 } 00:09:14.153 EOF 00:09:14.153 )") 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67601 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:14.153 { 00:09:14.153 "params": { 00:09:14.153 "name": "Nvme$subsystem", 00:09:14.153 "trtype": "$TEST_TRANSPORT", 00:09:14.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.153 "adrfam": "ipv4", 00:09:14.153 "trsvcid": "$NVMF_PORT", 00:09:14.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.153 "hdgst": ${hdgst:-false}, 00:09:14.153 "ddgst": ${ddgst:-false} 00:09:14.153 }, 00:09:14.153 "method": "bdev_nvme_attach_controller" 00:09:14.153 } 00:09:14.153 EOF 00:09:14.153 )") 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67604 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:14.153 { 00:09:14.153 "params": { 00:09:14.153 "name": "Nvme$subsystem", 00:09:14.153 "trtype": "$TEST_TRANSPORT", 00:09:14.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.153 "adrfam": "ipv4", 00:09:14.153 "trsvcid": "$NVMF_PORT", 00:09:14.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.153 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:14.153 "hdgst": ${hdgst:-false}, 00:09:14.153 "ddgst": ${ddgst:-false} 00:09:14.153 }, 00:09:14.153 "method": "bdev_nvme_attach_controller" 00:09:14.153 } 00:09:14.153 EOF 00:09:14.153 )") 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:14.153 "params": { 00:09:14.153 "name": "Nvme1", 00:09:14.153 "trtype": "tcp", 00:09:14.153 "traddr": "10.0.0.3", 00:09:14.153 "adrfam": "ipv4", 00:09:14.153 "trsvcid": "4420", 00:09:14.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.153 "hdgst": false, 00:09:14.153 "ddgst": false 00:09:14.153 }, 00:09:14.153 "method": "bdev_nvme_attach_controller" 00:09:14.153 }' 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:14.153 "params": { 00:09:14.153 "name": "Nvme1", 00:09:14.153 "trtype": "tcp", 00:09:14.153 "traddr": "10.0.0.3", 00:09:14.153 "adrfam": "ipv4", 00:09:14.153 "trsvcid": "4420", 00:09:14.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.153 "hdgst": false, 00:09:14.153 "ddgst": false 00:09:14.153 }, 00:09:14.153 "method": "bdev_nvme_attach_controller" 00:09:14.153 }' 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:14.153 "params": { 00:09:14.153 "name": "Nvme1", 00:09:14.153 "trtype": "tcp", 00:09:14.153 "traddr": "10.0.0.3", 00:09:14.153 "adrfam": "ipv4", 00:09:14.153 "trsvcid": "4420", 00:09:14.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.153 "hdgst": false, 00:09:14.153 "ddgst": false 00:09:14.153 }, 00:09:14.153 "method": "bdev_nvme_attach_controller" 00:09:14.153 }' 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:14.153 { 00:09:14.153 "params": { 00:09:14.153 "name": "Nvme$subsystem", 00:09:14.153 "trtype": "$TEST_TRANSPORT", 00:09:14.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.153 "adrfam": "ipv4", 00:09:14.153 "trsvcid": "$NVMF_PORT", 00:09:14.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.153 "hdgst": ${hdgst:-false}, 00:09:14.153 "ddgst": ${ddgst:-false} 00:09:14.153 }, 00:09:14.153 "method": "bdev_nvme_attach_controller" 00:09:14.153 } 00:09:14.153 EOF 00:09:14.153 )") 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:14.153 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:14.153 "params": { 00:09:14.153 "name": "Nvme1", 00:09:14.153 "trtype": "tcp", 00:09:14.153 "traddr": "10.0.0.3", 00:09:14.153 "adrfam": "ipv4", 00:09:14.153 "trsvcid": "4420", 00:09:14.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.154 "hdgst": false, 00:09:14.154 "ddgst": false 00:09:14.154 }, 00:09:14.154 "method": "bdev_nvme_attach_controller" 00:09:14.154 }' 00:09:14.154 [2024-10-01 15:22:13.135592] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:09:14.154 [2024-10-01 15:22:13.135681] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:14.154 [2024-10-01 15:22:13.157190] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:09:14.154 [2024-10-01 15:22:13.157304] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:14.154 [2024-10-01 15:22:13.165454] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:09:14.154 [2024-10-01 15:22:13.165543] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:14.154 [2024-10-01 15:22:13.169859] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:09:14.154 [2024-10-01 15:22:13.169942] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:14.154 15:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67597 00:09:14.154 [2024-10-01 15:22:13.314172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.412 [2024-10-01 15:22:13.354881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.412 [2024-10-01 15:22:13.369160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:14.412 [2024-10-01 15:22:13.400218] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.412 [2024-10-01 15:22:13.406558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:14.412 [2024-10-01 15:22:13.440548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.412 [2024-10-01 15:22:13.457465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:14.412 [2024-10-01 15:22:13.496463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:14.412 Running I/O for 1 seconds... 00:09:14.412 Running I/O for 1 seconds... 00:09:14.670 Running I/O for 1 seconds... 00:09:14.670 Running I/O for 1 seconds... 
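Before these four jobs could run, the trace above brought the in-namespace target up in a fixed order: bdev options before framework init (the app was started with --wait-for-rpc), then transport, backing malloc bdev, subsystem, namespace, and listener. As plain rpc.py calls (rpc_cmd in the trace is a thin wrapper around this script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_set_options -p 5 -c 1              # must precede framework_start_init
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE=64, block size 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420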
00:09:15.606 5946.00 IOPS, 23.23 MiB/s 00:09:15.606 Latency(us) 00:09:15.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.606 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:15.606 Nvme1n1 : 1.02 5979.25 23.36 0.00 0.00 21288.40 5630.14 42181.35 00:09:15.606 =================================================================================================================== 00:09:15.606 Total : 5979.25 23.36 0.00 0.00 21288.40 5630.14 42181.35 00:09:15.606 183816.00 IOPS, 718.03 MiB/s 00:09:15.606 Latency(us) 00:09:15.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.606 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:15.606 Nvme1n1 : 1.00 183449.40 716.60 0.00 0.00 693.94 316.51 1980.97 00:09:15.606 =================================================================================================================== 00:09:15.606 Total : 183449.40 716.60 0.00 0.00 693.94 316.51 1980.97 00:09:15.606 8454.00 IOPS, 33.02 MiB/s 00:09:15.606 Latency(us) 00:09:15.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.606 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:15.606 Nvme1n1 : 1.01 8530.81 33.32 0.00 0.00 14938.91 6702.55 25618.62 00:09:15.606 =================================================================================================================== 00:09:15.606 Total : 8530.81 33.32 0.00 0.00 14938.91 6702.55 25618.62 00:09:15.606 5496.00 IOPS, 21.47 MiB/s 00:09:15.606 Latency(us) 00:09:15.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.606 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:15.606 Nvme1n1 : 1.01 5590.51 21.84 0.00 0.00 22818.78 5064.15 47662.55 00:09:15.606 =================================================================================================================== 00:09:15.606 Total : 5590.51 21.84 0.00 0.00 22818.78 5064.15 47662.55 00:09:15.606 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67599 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67601 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67604 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 
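A quick sanity check on the unit conversion in these four result tables: MiB/s = IOPS x 4096 B / 2^20, so the write job's 5979.25 IOPS gives 23.36 MiB/s and the flush job's 183449.40 IOPS gives 716.60 MiB/s, both matching the MiB/s column. The relative magnitudes are also consistent: flush completes far faster than data-carrying I/O, while write and read at depth 128 settle around 21-23 ms average latency over the 1 s window.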
00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.865 rmmod nvme_tcp 00:09:15.865 rmmod nvme_fabrics 00:09:15.865 rmmod nvme_keyring 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 67557 ']' 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 67557 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 67557 ']' 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 67557 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67557 00:09:15.865 killing process with pid 67557 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67557' 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 67557 00:09:15.865 15:22:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 67557 00:09:16.123 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:16.123 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:16.123 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:16.123 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:16.123 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 
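Two teardown details worth noting in the trace above. The "for i in {1..20}" at @125 retries modprobe -r nvme-tcp because the module can remain busy briefly after the initiator disconnects (the set +e at @124 lets early attempts fail quietly); the rmmod lines are modprobe's verbose output. killprocess (@950-@974) then confirms the PID is alive and inspects its comm name ('reactor_0', i.e. an SPDK reactor) before signalling. A condensed sketch, with the loop's exit condition simplified and the sudo-wrapper branch from @960 elided:

  for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # may still be busy right after disconnect
  done
  modprobe -v -r nvme-fabrics

  killprocess() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0        # @954: nothing left to kill
    local name
    name=$(ps --no-headers -o comm= "$pid")        # @956: 'reactor_0' here
    # @960 special-cases name == 'sudo' (signal the child instead); elided
    echo "killing process with pid $pid"           # @968
    kill "$pid"                                    # @969
    wait "$pid"                                    # @974
  }
  killprocess "$nvmfpid"                           # pid 67557 in this run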
00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:16.124 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:16.382 00:09:16.382 real 0m3.445s 00:09:16.382 user 0m13.855s 00:09:16.382 sys 0m1.824s 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.382 ************************************ 00:09:16.382 END TEST nvmf_bdev_io_wait 00:09:16.382 ************************************ 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.382 ************************************ 00:09:16.382 START TEST nvmf_queue_depth 00:09:16.382 ************************************ 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:16.382 * Looking for test storage... 
00:09:16.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:16.382 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.641 --rc genhtml_branch_coverage=1 00:09:16.641 --rc genhtml_function_coverage=1 00:09:16.641 --rc genhtml_legend=1 00:09:16.641 --rc geninfo_all_blocks=1 00:09:16.641 --rc geninfo_unexecuted_blocks=1 00:09:16.641 00:09:16.641 ' 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.641 --rc genhtml_branch_coverage=1 00:09:16.641 --rc genhtml_function_coverage=1 00:09:16.641 --rc genhtml_legend=1 00:09:16.641 --rc geninfo_all_blocks=1 00:09:16.641 --rc geninfo_unexecuted_blocks=1 00:09:16.641 00:09:16.641 ' 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.641 --rc genhtml_branch_coverage=1 00:09:16.641 --rc genhtml_function_coverage=1 00:09:16.641 --rc genhtml_legend=1 00:09:16.641 --rc geninfo_all_blocks=1 00:09:16.641 --rc geninfo_unexecuted_blocks=1 00:09:16.641 00:09:16.641 ' 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.641 --rc genhtml_branch_coverage=1 00:09:16.641 --rc genhtml_function_coverage=1 00:09:16.641 --rc genhtml_legend=1 00:09:16.641 --rc geninfo_all_blocks=1 00:09:16.641 --rc geninfo_unexecuted_blocks=1 00:09:16.641 00:09:16.641 ' 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.641 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.642 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:16.642 
15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:16.642 15:22:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:16.642 Cannot find device "nvmf_init_br" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:16.642 Cannot find device "nvmf_init_br2" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:16.642 Cannot find device "nvmf_tgt_br" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.642 Cannot find device "nvmf_tgt_br2" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:16.642 Cannot find device "nvmf_init_br" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:16.642 Cannot find device "nvmf_init_br2" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:16.642 Cannot find device "nvmf_tgt_br" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:16.642 Cannot find device "nvmf_tgt_br2" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:16.642 Cannot find device "nvmf_br" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:16.642 Cannot find device "nvmf_init_if" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:16.642 Cannot find device "nvmf_init_if2" 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.642 15:22:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:16.642 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:16.901 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:16.902 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:16.902 
15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:16.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:16.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:09:16.902 00:09:16.902 --- 10.0.0.3 ping statistics --- 00:09:16.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.902 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:16.902 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:16.902 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:09:16.902 00:09:16.902 --- 10.0.0.4 ping statistics --- 00:09:16.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.902 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:16.902 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:17.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:17.161 00:09:17.161 --- 10.0.0.1 ping statistics --- 00:09:17.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.161 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:17.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:17.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:09:17.161 00:09:17.161 --- 10.0.0.2 ping statistics --- 00:09:17.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.161 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=67863 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 67863 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 67863 ']' 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.161 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.161 [2024-10-01 15:22:16.188737] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:09:17.161 [2024-10-01 15:22:16.188862] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.420 [2024-10-01 15:22:16.336486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.420 [2024-10-01 15:22:16.412625] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.420 [2024-10-01 15:22:16.412690] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.420 [2024-10-01 15:22:16.412705] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.420 [2024-10-01 15:22:16.412715] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.420 [2024-10-01 15:22:16.412725] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.420 [2024-10-01 15:22:16.412759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.420 [2024-10-01 15:22:16.549160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.420 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.420 Malloc0 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
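With the target app up (nvmfpid 67863, reactor on core 1), bring-up for the queue_depth run is a short RPC sequence: the transport is created here (@23), and the next few entries add the backing bdev, subsystem, namespace, and listener. Condensed, with sizes and addresses exactly as logged (64/512 are MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE from the script header):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192           # @23: TCP, 8 KiB IO unit
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0              # @24: 64 MiB RAM bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                              # @25: -a allows any host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # @26
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420                            # @27: the veth target IP

bdevperf (pid 67898) then attaches from the initiator side with bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1, as traced at @34 below.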
00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.680 [2024-10-01 15:22:16.607592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67898 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67898 /var/tmp/bdevperf.sock 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 67898 ']' 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:17.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.680 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.680 [2024-10-01 15:22:16.685206] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:09:17.680 [2024-10-01 15:22:16.685301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67898 ] 00:09:17.680 [2024-10-01 15:22:16.827392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.938 [2024-10-01 15:22:16.886263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.938 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.938 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:17.938 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:17.938 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.938 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.938 NVMe0n1 00:09:17.938 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.938 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:18.197 Running I/O for 10 seconds... 00:09:28.111 7173.00 IOPS, 28.02 MiB/s 7674.50 IOPS, 29.98 MiB/s 7850.67 IOPS, 30.67 MiB/s 7938.75 IOPS, 31.01 MiB/s 8008.00 IOPS, 31.28 MiB/s 8038.50 IOPS, 31.40 MiB/s 8044.43 IOPS, 31.42 MiB/s 8105.50 IOPS, 31.66 MiB/s 8176.11 IOPS, 31.94 MiB/s 8197.90 IOPS, 32.02 MiB/s 00:09:28.111 Latency(us) 00:09:28.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.111 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:28.111 Verification LBA range: start 0x0 length 0x4000 00:09:28.111 NVMe0n1 : 10.07 8235.60 32.17 0.00 0.00 123800.81 21924.77 116296.61 00:09:28.111 =================================================================================================================== 00:09:28.111 Total : 8235.60 32.17 0.00 0.00 123800.81 21924.77 116296.61 00:09:28.111 { 00:09:28.111 "results": [ 00:09:28.111 { 00:09:28.111 "job": "NVMe0n1", 00:09:28.111 "core_mask": "0x1", 00:09:28.111 "workload": "verify", 00:09:28.111 "status": "finished", 00:09:28.111 "verify_range": { 00:09:28.111 "start": 0, 00:09:28.111 "length": 16384 00:09:28.111 }, 00:09:28.111 "queue_depth": 1024, 00:09:28.111 "io_size": 4096, 00:09:28.111 "runtime": 10.072246, 00:09:28.111 "iops": 8235.601076462986, 00:09:28.111 "mibps": 32.17031670493354, 00:09:28.111 "io_failed": 0, 00:09:28.111 "io_timeout": 0, 00:09:28.111 "avg_latency_us": 123800.80938209961, 00:09:28.111 "min_latency_us": 21924.77090909091, 00:09:28.111 "max_latency_us": 116296.61090909092 00:09:28.111 } 00:09:28.111 ], 00:09:28.111 "core_count": 1 00:09:28.111 } 00:09:28.369 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67898 00:09:28.369 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 67898 ']' 00:09:28.369 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 67898 00:09:28.369 15:22:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:28.369 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.369 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67898 00:09:28.370 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.370 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:28.370 killing process with pid 67898 00:09:28.370 Received shutdown signal, test time was about 10.000000 seconds 00:09:28.370 00:09:28.370 Latency(us) 00:09:28.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.370 =================================================================================================================== 00:09:28.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:28.370 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67898' 00:09:28.370 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 67898 00:09:28.370 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 67898 00:09:28.370 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:28.370 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:28.370 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:28.370 15:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.972 rmmod nvme_tcp 00:09:30.972 rmmod nvme_fabrics 00:09:30.972 rmmod nvme_keyring 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 67863 ']' 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 67863 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 67863 ']' 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 67863 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67863 00:09:30.972 killing process with pid 67863 
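A cross-check on the queue-depth result above via Little's law (in-flight = IOPS x mean latency): 8235.60 IOPS x 0.12380 s gives roughly 1020 outstanding commands, so the -q 1024 bdevperf run genuinely held the TCP target at its configured depth for the full 10.07 s runtime, and the ~124 ms average latency is simply the queueing delay implied by that depth. The all-zero table under "Received shutdown signal" appears to be bdevperf's shutdown-time dump after the measured run had already completed.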
00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67863' 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 67863 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 67863 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:30.972 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:30.973 15:22:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.973 15:22:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:30.973 00:09:30.973 real 0m14.628s 00:09:30.973 user 0m23.292s 00:09:30.973 sys 0m2.067s 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.973 ************************************ 00:09:30.973 END TEST nvmf_queue_depth 00:09:30.973 ************************************ 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.973 ************************************ 00:09:30.973 START TEST nvmf_target_multipath 00:09:30.973 ************************************ 00:09:30.973 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:31.233 * Looking for test storage... 00:09:31.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.233 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:31.233 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:31.233 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:31.233 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:31.233 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.233 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.233 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.233 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.234 15:22:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:31.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.234 --rc genhtml_branch_coverage=1 00:09:31.234 --rc genhtml_function_coverage=1 00:09:31.234 --rc genhtml_legend=1 00:09:31.234 --rc geninfo_all_blocks=1 00:09:31.234 --rc geninfo_unexecuted_blocks=1 00:09:31.234 00:09:31.234 ' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:31.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.234 --rc genhtml_branch_coverage=1 00:09:31.234 --rc genhtml_function_coverage=1 00:09:31.234 --rc genhtml_legend=1 00:09:31.234 --rc geninfo_all_blocks=1 00:09:31.234 --rc geninfo_unexecuted_blocks=1 00:09:31.234 00:09:31.234 ' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:31.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.234 --rc genhtml_branch_coverage=1 00:09:31.234 --rc genhtml_function_coverage=1 00:09:31.234 --rc genhtml_legend=1 00:09:31.234 --rc geninfo_all_blocks=1 00:09:31.234 --rc geninfo_unexecuted_blocks=1 00:09:31.234 00:09:31.234 ' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:31.234 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.234 --rc genhtml_branch_coverage=1 00:09:31.234 --rc genhtml_function_coverage=1 00:09:31.234 --rc genhtml_legend=1 00:09:31.234 --rc geninfo_all_blocks=1 00:09:31.234 --rc geninfo_unexecuted_blocks=1 00:09:31.234 00:09:31.234 ' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:31.234 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:31.234 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # 
NVMF_BRIDGE=nvmf_br 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:31.235 Cannot find device "nvmf_init_br" 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:31.235 Cannot find device "nvmf_init_br2" 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:31.235 Cannot find device "nvmf_tgt_br" 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.235 Cannot find device "nvmf_tgt_br2" 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:31.235 Cannot find device "nvmf_init_br" 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:31.235 Cannot find device "nvmf_init_br2" 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:31.235 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:31.494 Cannot find device "nvmf_tgt_br" 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:31.494 Cannot find device "nvmf_tgt_br2" 00:09:31.494 15:22:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:31.494 Cannot find device "nvmf_br" 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:31.494 Cannot find device "nvmf_init_if" 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:31.494 Cannot find device "nvmf_init_if2" 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:31.494 15:22:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:31.494 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:31.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:09:31.754 00:09:31.754 --- 10.0.0.3 ping statistics --- 00:09:31.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.754 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:31.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:09:31.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:09:31.754 00:09:31.754 --- 10.0.0.4 ping statistics --- 00:09:31.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.754 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:09:31.754 00:09:31.754 --- 10.0.0.1 ping statistics --- 00:09:31.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.754 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:31.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:09:31.754 00:09:31.754 --- 10.0.0.2 ping statistics --- 00:09:31.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.754 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=68291 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 68291 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@831 -- # '[' -z 68291 ']' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.754 15:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:31.754 [2024-10-01 15:22:30.811842] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:09:31.754 [2024-10-01 15:22:30.811938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.013 [2024-10-01 15:22:30.945726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.013 [2024-10-01 15:22:31.009966] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.013 [2024-10-01 15:22:31.010028] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.013 [2024-10-01 15:22:31.010052] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.013 [2024-10-01 15:22:31.010060] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.013 [2024-10-01 15:22:31.010067] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
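The EAL parameter dump and trace notices above come from nvmf_tgt starting under ip netns exec with -m 0xF, a four-core mask, which is why four reactor threads report in immediately below; -e 0xFFFF enables every tracepoint group, matching the spdk_trace hint. Roughly how the harness brings the target up and waits for it, as a sketch (the readiness probe via rpc_get_methods is an illustrative stand-in for the harness's waitforlisten):

    # Sketch only: start the target inside the test namespace, then poll the
    # RPC Unix socket until it answers (or the process dies first).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done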
00:09:32.013 [2024-10-01 15:22:31.010688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.013 [2024-10-01 15:22:31.010792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.013 [2024-10-01 15:22:31.010741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.013 [2024-10-01 15:22:31.010801] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.013 15:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.013 15:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:09:32.013 15:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:32.013 15:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.013 15:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:32.013 15:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.013 15:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.579 [2024-10-01 15:22:31.493211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.579 15:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:32.837 Malloc0 00:09:32.837 15:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:33.096 15:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.663 15:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:33.663 [2024-10-01 15:22:32.797230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:33.663 15:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:33.921 [2024-10-01 15:22:33.081504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:34.179 15:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:34.179 15:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:34.482 15:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # 
waitforserial SPDKISFASTANDAWESOME 00:09:34.482 15:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:34.482 15:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.482 15:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:34.482 15:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
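At this point both connects (one per listener, 10.0.0.3 and 10.0.0.4) have landed in a single NVMe subsystem, so native multipath exposes one shared namespace plus a per-controller path node for each connection: nvme0c0n1 is namespace 1 reached through controller 0, nvme0c1n1 through controller 1. Each path's ANA group state is readable from sysfs, and the check that begins here simply polls that file until it matches. A simplified sketch of the check_ana_state loop (same idea as the harness traced here, structure approximate):

    # Sketch: wait up to ~20s for a controller path to report the expected
    # ANA state (optimized / non-optimized / inaccessible).
    check_ana_state_sketch() {
        local path=$1 ana_state=$2 timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while (( timeout-- > 0 )); do
            [[ -e $ana_state_f && $(<"$ana_state_f") == "$ana_state" ]] && return 0
            sleep 1s
        done
        echo "path $path never reached ANA state $ana_state" >&2
        return 1
    }
    # e.g.: check_ana_state_sketch nvme0c0n1 optimized

Later in the run the test flips these states through nvmf_subsystem_listener_set_ana_state and re-checks, which is what the inaccessible and non-optimized sequences below are exercising while fio runs against the multipath device.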
00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:36.382 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:36.383 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:36.383 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:36.383 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68422 00:09:36.383 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:36.383 15:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:36.640 [global] 00:09:36.640 thread=1 00:09:36.640 invalidate=1 00:09:36.640 rw=randrw 00:09:36.640 time_based=1 00:09:36.640 runtime=6 00:09:36.640 ioengine=libaio 00:09:36.640 direct=1 00:09:36.640 bs=4096 00:09:36.640 iodepth=128 00:09:36.640 norandommap=0 00:09:36.640 numjobs=1 00:09:36.640 00:09:36.640 verify_dump=1 00:09:36.640 verify_backlog=512 00:09:36.640 verify_state_save=0 00:09:36.640 do_verify=1 00:09:36.640 verify=crc32c-intel 00:09:36.640 [job0] 00:09:36.640 filename=/dev/nvme0n1 00:09:36.640 Could not set queue depth (nvme0n1) 00:09:36.640 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.640 fio-3.35 00:09:36.640 Starting 1 thread 00:09:37.574 15:22:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:37.832 15:22:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local 
ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:38.090 15:22:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:39.022 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:39.022 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:39.022 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:39.022 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:39.280 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:39.844 15:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:40.775 15:22:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:40.775 15:22:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:40.775 15:22:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:40.775 15:22:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68422 00:09:43.302 00:09:43.302 job0: (groupid=0, jobs=1): err= 0: pid=68443: Tue Oct 1 15:22:41 2024 00:09:43.302 read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(236MiB/6007msec) 00:09:43.302 slat (usec): min=2, max=7802, avg=57.04, stdev=259.06 00:09:43.302 clat (usec): min=728, max=18768, avg=8700.36, stdev=1606.00 00:09:43.302 lat (usec): min=792, max=18786, avg=8757.40, stdev=1618.34 00:09:43.302 clat percentiles (usec): 00:09:43.302 | 1.00th=[ 4752], 5.00th=[ 6456], 10.00th=[ 7242], 20.00th=[ 7635], 00:09:43.302 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:09:43.302 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11600], 00:09:43.302 | 99.00th=[13566], 99.50th=[14615], 99.90th=[16581], 99.95th=[18220], 00:09:43.302 | 99.99th=[18482] 00:09:43.302 bw ( KiB/s): min= 7520, max=26304, per=51.56%, avg=20767.73, stdev=6549.04, samples=11 00:09:43.302 iops : min= 1880, max= 6576, avg=5191.91, stdev=1637.25, samples=11 00:09:43.302 write: IOPS=5809, BW=22.7MiB/s (23.8MB/s)(122MiB/5388msec); 0 zone resets 00:09:43.302 slat (usec): min=12, max=2144, avg=69.13, stdev=163.94 00:09:43.302 clat (usec): min=601, max=17517, avg=7520.27, stdev=1386.15 00:09:43.302 lat (usec): min=675, max=17565, avg=7589.40, stdev=1392.64 00:09:43.302 clat percentiles (usec): 00:09:43.302 | 1.00th=[ 3654], 5.00th=[ 5211], 10.00th=[ 6063], 20.00th=[ 6652], 00:09:43.302 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7701], 00:09:43.302 | 70.00th=[ 8029], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[ 9765], 00:09:43.302 | 99.00th=[11207], 99.50th=[12125], 99.90th=[15139], 99.95th=[15533], 00:09:43.302 | 99.99th=[16581] 00:09:43.302 bw ( KiB/s): min= 7504, max=25984, per=89.66%, avg=20836.27, stdev=6493.01, samples=11 00:09:43.302 iops : min= 1876, max= 6496, avg=5209.00, stdev=1623.24, samples=11 00:09:43.302 lat (usec) : 750=0.01%, 
1000=0.01% 00:09:43.302 lat (msec) : 2=0.08%, 4=0.57%, 10=87.06%, 20=12.27% 00:09:43.302 cpu : usr=5.98%, sys=24.58%, ctx=5993, majf=0, minf=90 00:09:43.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:43.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.302 issued rwts: total=60488,31303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.302 00:09:43.302 Run status group 0 (all jobs): 00:09:43.302 READ: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=236MiB (248MB), run=6007-6007msec 00:09:43.302 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=122MiB (128MB), run=5388-5388msec 00:09:43.302 00:09:43.302 Disk stats (read/write): 00:09:43.302 nvme0n1: ios=59632/30694, merge=0/0, ticks=486005/215143, in_queue=701148, util=98.62% 00:09:43.302 15:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:43.302 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:09:43.560 15:22:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:44.492 15:22:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:44.492 15:22:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:44.492 15:22:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:44.492 15:22:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:44.492 15:22:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68582 00:09:44.492 15:22:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:44.492 15:22:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:44.492 [global] 00:09:44.492 thread=1 00:09:44.492 invalidate=1 00:09:44.492 rw=randrw 00:09:44.492 time_based=1 00:09:44.492 runtime=6 00:09:44.492 ioengine=libaio 00:09:44.492 direct=1 00:09:44.492 bs=4096 00:09:44.492 iodepth=128 00:09:44.492 norandommap=0 00:09:44.492 numjobs=1 00:09:44.492 00:09:44.492 verify_dump=1 00:09:44.492 verify_backlog=512 00:09:44.492 verify_state_save=0 00:09:44.492 do_verify=1 00:09:44.492 verify=crc32c-intel 00:09:44.492 [job0] 00:09:44.492 filename=/dev/nvme0n1 00:09:44.492 Could not set queue depth (nvme0n1) 00:09:44.750 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.750 fio-3.35 00:09:44.750 Starting 1 thread 00:09:45.683 15:22:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:45.940 15:22:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:46.506 15:22:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:47.438 15:22:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:47.438 15:22:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:47.438 15:22:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:47.438 15:22:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:48.003 15:22:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:48.262 15:22:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:49.637 15:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:49.637 15:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:49.637 15:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:49.637 15:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68582 00:09:51.013 00:09:51.013 job0: (groupid=0, jobs=1): err= 0: pid=68604: Tue Oct 1 15:22:49 2024 00:09:51.013 read: IOPS=11.2k, BW=43.8MiB/s (46.0MB/s)(263MiB/6004msec) 00:09:51.013 slat (usec): min=4, max=6309, avg=43.78, stdev=212.02 00:09:51.013 clat (usec): min=199, max=22644, avg=7806.20, stdev=2743.22 00:09:51.013 lat (usec): min=224, max=22665, avg=7849.98, stdev=2756.27 00:09:51.013 clat percentiles (usec): 00:09:51.013 | 1.00th=[ 840], 5.00th=[ 1729], 10.00th=[ 4015], 20.00th=[ 6194], 00:09:51.013 | 30.00th=[ 7308], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8455], 00:09:51.013 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10945], 95.00th=[12125], 00:09:51.013 | 99.00th=[14091], 99.50th=[15533], 99.90th=[17957], 99.95th=[19530], 00:09:51.013 | 99.99th=[21627] 00:09:51.013 bw ( KiB/s): min= 6752, max=36472, per=53.86%, avg=24172.36, stdev=8609.94, samples=11 00:09:51.013 iops : min= 1688, max= 9118, avg=6043.09, stdev=2152.48, samples=11 00:09:51.013 write: IOPS=6773, BW=26.5MiB/s (27.7MB/s)(140MiB/5291msec); 0 zone resets 00:09:51.013 slat (usec): min=8, max=2946, avg=56.54, stdev=134.70 00:09:51.013 clat (usec): min=180, max=21164, avg=6599.90, stdev=2705.37 00:09:51.013 lat (usec): min=208, max=21317, avg=6656.44, stdev=2714.76 00:09:51.013 clat percentiles (usec): 00:09:51.013 | 1.00th=[ 586], 5.00th=[ 1004], 10.00th=[ 2278], 20.00th=[ 4359], 00:09:51.013 | 30.00th=[ 5932], 40.00th=[ 6652], 50.00th=[ 7046], 60.00th=[ 7439], 00:09:51.013 | 70.00th=[ 7832], 80.00th=[ 8455], 90.00th=[ 9765], 95.00th=[10683], 00:09:51.013 | 99.00th=[12387], 99.50th=[13435], 99.90th=[15795], 99.95th=[16909], 00:09:51.013 | 99.99th=[19006] 00:09:51.013 bw ( KiB/s): min= 7312, max=35600, per=89.33%, avg=24203.64, stdev=8387.73, samples=11 00:09:51.013 iops : min= 1828, max= 8900, avg=6050.91, stdev=2096.93, samples=11 00:09:51.013 lat (usec) : 250=0.01%, 500=0.30%, 750=0.96%, 1000=1.51% 00:09:51.013 lat (msec) : 2=4.22%, 4=5.52%, 10=73.88%, 20=13.58%, 50=0.02% 00:09:51.013 cpu : usr=6.33%, sys=26.97%, ctx=8411, majf=0, minf=90 00:09:51.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:51.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.013 issued rwts: total=67360,35840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.013 00:09:51.013 Run status group 0 (all jobs): 00:09:51.013 READ: bw=43.8MiB/s (46.0MB/s), 43.8MiB/s-43.8MiB/s (46.0MB/s-46.0MB/s), io=263MiB (276MB), run=6004-6004msec 00:09:51.013 WRITE: bw=26.5MiB/s (27.7MB/s), 26.5MiB/s-26.5MiB/s (27.7MB/s-27.7MB/s), io=140MiB (147MB), run=5291-5291msec 00:09:51.013 00:09:51.013 Disk stats (read/write): 00:09:51.013 nvme0n1: ios=66535/35328, merge=0/0, ticks=484329/215165, in_queue=699494, util=98.58% 00:09:51.013 15:22:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:51.013 15:22:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.013 15:22:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:51.013 15:22:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:51.013 15:22:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.013 15:22:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:51.013 15:22:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.013 15:22:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:51.013 15:22:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.271 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:51.271 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:51.271 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:51.271 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:51.271 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:51.271 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.579 rmmod nvme_tcp 00:09:51.579 rmmod nvme_fabrics 00:09:51.579 rmmod nvme_keyring 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 68291 ']' 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 68291 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 68291 ']' 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 68291 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68291 00:09:51.579 killing process with pid 68291 00:09:51.579 
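[editor's note] The disconnect sequence traced above polls lsblk until no block device still advertises the subsystem serial before tearing the target down. A minimal sketch of that wait, assuming a hypothetical wait_serial_gone helper; the retry cap is an illustrative assumption, not the exact common.sh code:

# Poll until no block device reports the given NVMe serial, as the
# waitforserial_disconnect trace above does after "nvme disconnect".
# Helper name and the 10-poll cap are assumptions for illustration.
wait_serial_gone() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i >= 10 )) && return 1   # give up after ~10 seconds
        sleep 1
    done
    return 0
}
wait_serial_gone SPDKISFASTANDAWESOME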
15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68291' 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 68291 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 68291 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:51.579 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:51.838 15:22:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:51.838 00:09:51.838 real 0m20.822s 00:09:51.838 user 1m21.738s 00:09:51.838 sys 0m7.005s 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:51.838 ************************************ 00:09:51.838 END TEST nvmf_target_multipath 00:09:51.838 ************************************ 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:51.838 ************************************ 00:09:51.838 START TEST nvmf_zcopy 00:09:51.838 ************************************ 00:09:51.838 15:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:52.097 * Looking for test storage... 
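[editor's note] One detail worth pulling out of the nvmftestfini trace above: teardown never enumerates its firewall rules. Every rule was installed earlier with an '-m comment --comment SPDK_NVMF:...' tag (visible in the setup trace further below), so the iptr step cleans up with a single textual filter over the whole ruleset:

# Strip every rule tagged SPDK_NVMF in one pass; untagged rules survive.
iptables-save | grep -v SPDK_NVMF | iptables-restore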
00:09:52.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:52.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.097 --rc genhtml_branch_coverage=1 00:09:52.097 --rc genhtml_function_coverage=1 00:09:52.097 --rc genhtml_legend=1 00:09:52.097 --rc geninfo_all_blocks=1 00:09:52.097 --rc geninfo_unexecuted_blocks=1 00:09:52.097 00:09:52.097 ' 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:52.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.097 --rc genhtml_branch_coverage=1 00:09:52.097 --rc genhtml_function_coverage=1 00:09:52.097 --rc genhtml_legend=1 00:09:52.097 --rc geninfo_all_blocks=1 00:09:52.097 --rc geninfo_unexecuted_blocks=1 00:09:52.097 00:09:52.097 ' 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:52.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.097 --rc genhtml_branch_coverage=1 00:09:52.097 --rc genhtml_function_coverage=1 00:09:52.097 --rc genhtml_legend=1 00:09:52.097 --rc geninfo_all_blocks=1 00:09:52.097 --rc geninfo_unexecuted_blocks=1 00:09:52.097 00:09:52.097 ' 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:52.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.097 --rc genhtml_branch_coverage=1 00:09:52.097 --rc genhtml_function_coverage=1 00:09:52.097 --rc genhtml_legend=1 00:09:52.097 --rc geninfo_all_blocks=1 00:09:52.097 --rc geninfo_unexecuted_blocks=1 00:09:52.097 00:09:52.097 ' 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
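[editor's note] The scripts/common.sh trace a few records back is the stock shell idiom for comparing dotted versions: split both strings on '.', '-' and ':', then compare component by component as integers. A condensed sketch of the same logic, simplified relative to the traced cmp_versions (which also handles '>', '=' and its decimal normalizer):

# Return success when dotted version $1 is strictly older than $2.
version_lt() {
    local IFS=.-: v
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
        local x=${a[v]:-0} y=${b[v]:-0}
        [[ $x =~ ^[0-9]+$ ]] || x=0   # non-numeric parts compare as 0
        [[ $y =~ ^[0-9]+$ ]] || y=0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1
}
version_lt 1.15 2 && echo "lcov is older than 2"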
00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:09:52.097 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.098 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
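[editor's note] The "[: : integer expression expected" line above is not a test failure: build_nvmf_app_args evaluated "[ '' -eq 1 ]" because an optional flag variable was empty, and test(1) refuses to treat an empty string as an integer. The usual guard substitutes a default before the numeric comparison; the variable name below is illustrative only:

# Reproduces the harmless noise seen in the trace:
[ '' -eq 1 ]                          # bash: [: : integer expression expected
# Defensive form: default the variable before the numeric test.
[ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo "flag enabled"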
00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:52.098 Cannot find device "nvmf_init_br" 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:52.098 15:22:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:52.098 Cannot find device "nvmf_init_br2" 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:52.098 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:52.357 Cannot find device "nvmf_tgt_br" 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.357 Cannot find device "nvmf_tgt_br2" 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:52.357 Cannot find device "nvmf_init_br" 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:52.357 Cannot find device "nvmf_init_br2" 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:52.357 Cannot find device "nvmf_tgt_br" 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:52.357 Cannot find device "nvmf_tgt_br2" 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:52.357 Cannot find device "nvmf_br" 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:52.357 Cannot find device "nvmf_init_if" 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:52.357 Cannot find device "nvmf_init_if2" 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:52.357 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:52.615 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:52.615 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:52.615 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:52.615 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:52.615 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:52.615 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:52.615 15:22:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:52.615 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:52.615 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:52.615 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:52.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:52.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:09:52.615 00:09:52.615 --- 10.0.0.3 ping statistics --- 00:09:52.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.615 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:09:52.615 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:52.615 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:52.615 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:09:52.615 00:09:52.616 --- 10.0.0.4 ping statistics --- 00:09:52.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.616 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:52.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:52.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:52.616 00:09:52.616 --- 10.0.0.1 ping statistics --- 00:09:52.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.616 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:52.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:52.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:09:52.616 00:09:52.616 --- 10.0.0.2 ping statistics --- 00:09:52.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.616 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=68936 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 68936 00:09:52.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 68936 ']' 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.616 15:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.616 [2024-10-01 15:22:51.728804] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
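[editor's note] nvmfappstart, traced above, backgrounds nvmf_tgt inside the test namespace, records its pid (68936 here), and waitforlisten then blocks until the RPC endpoint at /var/tmp/spdk.sock answers. A rough stand-in for that wait, under the assumption that probing for the UNIX socket is sufficient; SPDK's real helper performs additional liveness checks:

# Launch the target in the namespace and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for (( i = 0; i < 100; i++ )); do
    [ -S /var/tmp/spdk.sock ] && break                      # RPC socket is up
    kill -0 "$nvmfpid" 2>/dev/null || { echo "target exited early" >&2; break; }
    sleep 0.1
done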
00:09:52.616 [2024-10-01 15:22:51.729156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.875 [2024-10-01 15:22:51.870728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.875 [2024-10-01 15:22:51.930275] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.875 [2024-10-01 15:22:51.930550] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.875 [2024-10-01 15:22:51.930572] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.875 [2024-10-01 15:22:51.930580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.875 [2024-10-01 15:22:51.930589] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.875 [2024-10-01 15:22:51.930620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.875 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.875 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:52.875 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:52.875 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:52.875 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.875 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.875 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:52.875 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:52.875 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.875 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.133 [2024-10-01 15:22:52.051517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.133 [2024-10-01 15:22:52.067675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.133 malloc0 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.133 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:53.134 { 00:09:53.134 "params": { 00:09:53.134 "name": "Nvme$subsystem", 00:09:53.134 "trtype": "$TEST_TRANSPORT", 00:09:53.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.134 "adrfam": "ipv4", 00:09:53.134 "trsvcid": "$NVMF_PORT", 00:09:53.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.134 "hdgst": ${hdgst:-false}, 00:09:53.134 "ddgst": ${ddgst:-false} 00:09:53.134 }, 00:09:53.134 "method": "bdev_nvme_attach_controller" 00:09:53.134 } 00:09:53.134 EOF 00:09:53.134 )") 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
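[editor's note] The gen_nvmf_target_json trace above, completed just below, shows how the connection config reaches bdevperf: each attach request is a here-document whose ${hdgst:-false}-style expansions are resolved by the shell, the result is validated through jq, and the joined string is handed to bdevperf over an anonymous descriptor (--json /dev/fd/62). A trimmed sketch of the pattern, with a hypothetical helper name and only the single-subsystem case:

# Emit one bdev_nvme_attach_controller request; expansions such as
# ${hdgst:-false} are filled in by the shell before jq validates them.
gen_target_json() {
cat <<EOF | jq .
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  }
}
EOF
}
gen_target_json    # in the trace this stream feeds bdevperf --json /dev/fd/62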
00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:53.134 15:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:53.134 "params": { 00:09:53.134 "name": "Nvme1", 00:09:53.134 "trtype": "tcp", 00:09:53.134 "traddr": "10.0.0.3", 00:09:53.134 "adrfam": "ipv4", 00:09:53.134 "trsvcid": "4420", 00:09:53.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.134 "hdgst": false, 00:09:53.134 "ddgst": false 00:09:53.134 }, 00:09:53.134 "method": "bdev_nvme_attach_controller" 00:09:53.134 }' 00:09:53.134 [2024-10-01 15:22:52.183955] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:09:53.134 [2024-10-01 15:22:52.184093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68968 ] 00:09:53.392 [2024-10-01 15:22:52.324219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.392 [2024-10-01 15:22:52.411840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.392 Running I/O for 10 seconds... 00:10:03.664 5820.00 IOPS, 45.47 MiB/s 5613.50 IOPS, 43.86 MiB/s 5676.33 IOPS, 44.35 MiB/s 5711.75 IOPS, 44.62 MiB/s 5730.00 IOPS, 44.77 MiB/s 5648.83 IOPS, 44.13 MiB/s 5650.57 IOPS, 44.15 MiB/s 5659.12 IOPS, 44.21 MiB/s 5676.11 IOPS, 44.34 MiB/s 5646.60 IOPS, 44.11 MiB/s 00:10:03.664 Latency(us) 00:10:03.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.664 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:03.664 Verification LBA range: start 0x0 length 0x1000 00:10:03.664 Nvme1n1 : 10.01 5651.09 44.15 0.00 0.00 22578.57 3127.85 31218.97 00:10:03.664 =================================================================================================================== 00:10:03.664 Total : 5651.09 44.15 0.00 0.00 22578.57 3127.85 31218.97 00:10:03.664 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69085 00:10:03.664 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:03.664 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:03.664 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.664 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:03.664 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:03.664 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:03.664 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:03.664 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:03.664 { 00:10:03.664 "params": { 00:10:03.664 "name": "Nvme$subsystem", 00:10:03.664 "trtype": "$TEST_TRANSPORT", 00:10:03.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.664 "adrfam": "ipv4", 00:10:03.664 "trsvcid": "$NVMF_PORT", 00:10:03.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.664 "hdgst": ${hdgst:-false}, 
00:10:03.664 "ddgst": ${ddgst:-false} 00:10:03.664 }, 00:10:03.665 "method": "bdev_nvme_attach_controller" 00:10:03.665 } 00:10:03.665 EOF 00:10:03.665 )") 00:10:03.665 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:03.665 [2024-10-01 15:23:02.755815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.665 [2024-10-01 15:23:02.755883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.665 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:03.665 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.665 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:03.665 15:23:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:03.665 "params": { 00:10:03.665 "name": "Nvme1", 00:10:03.665 "trtype": "tcp", 00:10:03.665 "traddr": "10.0.0.3", 00:10:03.665 "adrfam": "ipv4", 00:10:03.665 "trsvcid": "4420", 00:10:03.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.665 "hdgst": false, 00:10:03.665 "ddgst": false 00:10:03.665 }, 00:10:03.665 "method": "bdev_nvme_attach_controller" 00:10:03.665 }' 00:10:03.665 [2024-10-01 15:23:02.767745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.665 [2024-10-01 15:23:02.767809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.665 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.665 [2024-10-01 15:23:02.779741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.665 [2024-10-01 15:23:02.779812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.665 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.665 [2024-10-01 15:23:02.791723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.665 [2024-10-01 15:23:02.791782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.665 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.665 [2024-10-01 15:23:02.803700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.665 [2024-10-01 15:23:02.803753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.665 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.665 [2024-10-01 15:23:02.815722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.665 [2024-10-01 15:23:02.815767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.665 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.665 [2024-10-01 15:23:02.827713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.665 [2024-10-01 15:23:02.827757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.665 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.839751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.839829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 [2024-10-01 15:23:02.841264] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:10:03.925 [2024-10-01 15:23:02.841398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69085 ] 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.851754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.851819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.863706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.863750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.871704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.871749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.883730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.883781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.895807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.895887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.907810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.907897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.919817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.919900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.931816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.931891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.943786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.943858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.955818] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.955894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.925 [2024-10-01 15:23:02.967822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.925 [2024-10-01 15:23:02.967895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.925 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.926 [2024-10-01 15:23:02.979803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.926 [2024-10-01 15:23:02.979874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.926 [2024-10-01 15:23:02.982591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.926 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.926 [2024-10-01 15:23:02.991789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.926 [2024-10-01 15:23:02.991855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.926 2024/10/01 15:23:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.926 [2024-10-01 15:23:03.003779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.926 [2024-10-01 15:23:03.003836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.926 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.926 [2024-10-01 15:23:03.015853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.926 [2024-10-01 15:23:03.015942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.926 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.926 [2024-10-01 15:23:03.027807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.926 [2024-10-01 15:23:03.027869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:03.926 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.926 [2024-10-01 15:23:03.039804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.926 [2024-10-01 15:23:03.039868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.926 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.926 [2024-10-01 15:23:03.051776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.926 [2024-10-01 15:23:03.051835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.926 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.926 [2024-10-01 15:23:03.063806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.926 [2024-10-01 15:23:03.063865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.926 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.926 [2024-10-01 15:23:03.070239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.926 [2024-10-01 15:23:03.075836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.926 [2024-10-01 15:23:03.075899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.926 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:03.926 [2024-10-01 15:23:03.087880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.926 [2024-10-01 15:23:03.087981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.926 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.185 [2024-10-01 15:23:03.099889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.185 [2024-10-01 15:23:03.099974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.185 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.185 [2024-10-01 15:23:03.107862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.185 [2024-10-01 15:23:03.107934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.185 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.185 [2024-10-01 15:23:03.115846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.185 [2024-10-01 15:23:03.115913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.185 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.185 [2024-10-01 15:23:03.127882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.185 [2024-10-01 15:23:03.127960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.185 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.185 [2024-10-01 15:23:03.139810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.185 [2024-10-01 15:23:03.139859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.185 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.185 [2024-10-01 15:23:03.152123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.185 [2024-10-01 15:23:03.152174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.185 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.185 [2024-10-01 15:23:03.164118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.185 [2024-10-01 15:23:03.164167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.185 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.185 [2024-10-01 15:23:03.176122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.185 [2024-10-01 15:23:03.176167] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.185 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.185 [2024-10-01 15:23:03.188125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.185 [2024-10-01 15:23:03.188168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.185 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.185 [2024-10-01 15:23:03.200129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.185 [2024-10-01 15:23:03.200172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.186 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.186 [2024-10-01 15:23:03.212173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.186 [2024-10-01 15:23:03.212222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.186 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:04.186 [2024-10-01 15:23:03.224140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.186 [2024-10-01 15:23:03.224187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.186 Running I/O for 5 seconds... 
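The Code=-32602 failures above are straightforward to reproduce by hand outside the harness. A minimal sketch, not taken from this run, assuming the target from this test is still up on the default /var/tmp/spdk.sock socket and already exposes malloc0 as NSID 1 under cnode1:

    cd /home/vagrant/spdk_repo/spdk
    # re-adding an NSID that the subsystem already holds is rejected
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
    # expected: JSON-RPC error Code=-32602 (Invalid parameters); the target side
    # logs "Requested NSID 1 already in use", matching the records above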
00:10:04.186 2024/10/01 15:23:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:04.186 [the same error group keeps repeating while the 5-second run is in flight, timestamps advancing from 15:23:03.243 through 15:23:04.180 with no other change]
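To see why every attempt is rejected, the subsystem's namespace list can be inspected mid-run. A hedged sketch (the jq filter is an illustration, not from this log; assumes the default /var/tmp/spdk.sock socket):

    # list cnode1's namespaces; NSID 1 (malloc0) is already attached, so each
    # nvmf_subsystem_add_ns requesting nsid 1 comes back as Invalid parameters
    scripts/rpc.py nvmf_get_subsystems | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'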
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.226 2024/10/01 15:23:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:05.226 [2024-10-01 15:23:04.198029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.226 [2024-10-01 15:23:04.198093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.226 2024/10/01 15:23:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:05.226 [2024-10-01 15:23:04.215187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.226 [2024-10-01 15:23:04.215262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.226 2024/10/01 15:23:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:05.226 10282.00 IOPS, 80.33 MiB/s [2024-10-01 15:23:04.232067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.226 [2024-10-01 15:23:04.232138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.226 2024/10/01 15:23:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:05.226 [2024-10-01 15:23:04.248595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.226 [2024-10-01 15:23:04.248670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.226 2024/10/01 15:23:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:05.226 [2024-10-01 15:23:04.265288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.226 [2024-10-01 15:23:04.265376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.226 2024/10/01 15:23:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:05.226 [2024-10-01 15:23:04.281482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.226 [2024-10-01 15:23:04.281574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.226 2024/10/01 15:23:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], 
00:10:06.287 10750.00 IOPS, 83.98 MiB/s
[... the failure cycle continues unchanged from 15:23:05.242 through 15:23:06.222 ...]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.066 [2024-10-01 15:23:06.222792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.066 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.324 10974.67 IOPS, 85.74 MiB/s [2024-10-01 15:23:06.238459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.324 [2024-10-01 15:23:06.238516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.324 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.324 [2024-10-01 15:23:06.255694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.324 [2024-10-01 15:23:06.255748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.324 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.324 [2024-10-01 15:23:06.272745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.324 [2024-10-01 15:23:06.272799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.288469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.288527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.305684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.305753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.321646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.321699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.337901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.337954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.354759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.354811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.371873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.371930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.387827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.387884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.404648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.404705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.421005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.421062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.437115] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.437182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.454860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.454951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.472896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.472976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.325 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.325 [2024-10-01 15:23:06.489339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.325 [2024-10-01 15:23:06.489442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.584 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.584 [2024-10-01 15:23:06.502966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.584 [2024-10-01 15:23:06.503041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.584 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.584 [2024-10-01 15:23:06.522123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.584 [2024-10-01 15:23:06.522212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.584 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.584 [2024-10-01 15:23:06.537796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.584 [2024-10-01 15:23:06.537888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.584 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.584 [2024-10-01 15:23:06.555766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.584 [2024-10-01 15:23:06.555843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.584 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.584 [2024-10-01 15:23:06.574866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.584 [2024-10-01 15:23:06.574947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.584 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.584 [2024-10-01 15:23:06.588696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.584 [2024-10-01 15:23:06.588777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.584 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.584 [2024-10-01 15:23:06.604079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.584 [2024-10-01 15:23:06.604157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.584 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.584 [2024-10-01 15:23:06.621702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.584 [2024-10-01 15:23:06.621779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.584 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.584 [2024-10-01 15:23:06.640997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.584 [2024-10-01 15:23:06.641093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.585 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.585 [2024-10-01 15:23:06.658083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:07.585 [2024-10-01 15:23:06.658161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.585 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.585 [2024-10-01 15:23:06.672857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.585 [2024-10-01 15:23:06.672947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.585 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.585 [2024-10-01 15:23:06.688413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.585 [2024-10-01 15:23:06.688523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.585 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.585 [2024-10-01 15:23:06.706921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.585 [2024-10-01 15:23:06.707016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.585 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.585 [2024-10-01 15:23:06.724990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.585 [2024-10-01 15:23:06.725080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.585 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.585 [2024-10-01 15:23:06.738837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.585 [2024-10-01 15:23:06.738920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.585 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.759027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.759128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.777032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.777121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.794022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.794113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.812694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.812810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.827227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.827333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.843231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.843344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.858942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.859024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.868307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:07.844 [2024-10-01 15:23:06.868368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.883496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.883566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.901070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.901147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.919358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.919449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.936838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.936901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.844 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.844 [2024-10-01 15:23:06.951562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.844 [2024-10-01 15:23:06.951624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.845 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.845 [2024-10-01 15:23:06.969172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.845 [2024-10-01 15:23:06.969243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.845 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.845 [2024-10-01 15:23:06.984399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.845 [2024-10-01 15:23:06.984484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.845 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.845 [2024-10-01 15:23:06.994135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.845 [2024-10-01 15:23:06.994200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.845 2024/10/01 15:23:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:07.845 [2024-10-01 15:23:07.009551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.845 [2024-10-01 15:23:07.009613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.027550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.027615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.043018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.043079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.061965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.062029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.077464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.077527] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.094583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.094647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.110373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.110456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.129371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.129458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.144748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.144826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.162482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.162556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.177385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.177463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.193708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.193779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.210679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.210756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.103 [2024-10-01 15:23:07.227051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.103 [2024-10-01 15:23:07.227120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.103 10776.50 IOPS, 84.19 MiB/s 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.104 [2024-10-01 15:23:07.244695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.104 [2024-10-01 15:23:07.244765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.104 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.104 [2024-10-01 15:23:07.260699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.104 [2024-10-01 15:23:07.260766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.104 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.277405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.277486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.293383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.293468] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.310332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.310396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.326489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.326553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.343642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.343707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.358168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.358235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.376041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.376111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.390605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.390671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.407415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.407494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.423990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.424053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.440229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.440302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.455370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.455449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.472885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.472953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.488535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.488606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.506225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.506296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.362 [2024-10-01 15:23:07.521629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.362 [2024-10-01 15:23:07.521677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.362 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.621 [2024-10-01 15:23:07.538262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.621 [2024-10-01 15:23:07.538314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.621 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.621 [2024-10-01 15:23:07.553655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.621 [2024-10-01 15:23:07.553710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.621 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.621 [2024-10-01 15:23:07.570981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.621 [2024-10-01 15:23:07.571041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.622 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.622 [2024-10-01 15:23:07.586072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.622 [2024-10-01 15:23:07.586129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.622 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.622 [2024-10-01 15:23:07.604040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.622 [2024-10-01 15:23:07.604099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.622 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:08.622 [2024-10-01 15:23:07.619528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.622 [2024-10-01 15:23:07.619579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.622 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.622 [2024-10-01 15:23:07.636453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.622 [2024-10-01 15:23:07.636514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.622 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.622 [2024-10-01 15:23:07.652399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.622 [2024-10-01 15:23:07.652469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.622 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.622 [2024-10-01 15:23:07.670195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.622 [2024-10-01 15:23:07.670255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.622 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.622 [2024-10-01 15:23:07.685983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.622 [2024-10-01 15:23:07.686051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.622 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.622 [2024-10-01 15:23:07.695282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.622 [2024-10-01 15:23:07.695336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.622 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:08.622 [2024-10-01 15:23:07.711414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.622 [2024-10-01 15:23:07.711491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.622 2024/10/01 15:23:07 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:08.622 [2024-10-01 15:23:07.727137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:08.622 [2024-10-01 15:23:07.727199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:08.622 2024/10/01 15:23:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the same three-line error sequence repeats 30 more times between 15:23:07.744 and 15:23:08.218 (console time 00:10:08.622-00:10:09.141); only the timestamps differ]
00:10:09.141 10821.80 IOPS, 84.55 MiB/s
00:10:09.141 [2024-10-01 15:23:08.235571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:09.141 [2024-10-01 15:23:08.235659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:09.141 2024/10/01 15:23:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:09.141
00:10:09.141 Latency(us)
00:10:09.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:09.141 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:09.141 Nvme1n1 : 5.01 10817.49 84.51 0.00 0.00 11813.82 4408.79 27882.59
00:10:09.141 ===================================================================================================================
00:10:09.141 Total : 10817.49 84.51 0.00 0.00 11813.82 4408.79 27882.59
[the error sequence resumes and repeats 15 more times between 15:23:08.245 and 15:23:08.414 (console time 00:10:09.141-00:10:09.400)]
00:10:09.400 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69085) - No such process
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69085
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:09.400 delay0
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:09.400 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.401 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:09.401 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.401 15:23:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:10:09.659 [2024-10-01 15:23:08.616259] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
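The long run of -32602 failures above is the test repeatedly re-adding NSID 1 while it is already claimed, each attempt being rejected as expected; the tail of the test then swaps the malloc bdev for a delay bdev and runs the bundled abort example against it. A rough by-hand equivalent of that tail, sketched with the same RPCs (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py; $SPDK below is an assumed path to an SPDK checkout, with the target's RPC socket at the default /var/tmp/spdk.sock and the listener from this run at 10.0.0.3:4420):

    # Reproduce the expected failure: NSID 1 is already in use, so a second
    # add with the same NSID is rejected with JSON-RPC -32602 (Invalid parameters).
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Swap in a delay bdev (all four latencies 1,000,000 us, per the flags in
    # the trace above) so queued I/O stays in flight long enough to be aborted.
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Drive 64-deep random I/O at the namespace for 5 seconds while submitting
    # aborts for commands still in flight.
    $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'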
00:10:16.253 Initializing NVMe Controllers
00:10:16.253 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:10:16.253 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:16.253 Initialization complete. Launching workers.
00:10:16.253 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 976
00:10:16.253 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1263, failed to submit 33
00:10:16.253 success 1088, unsuccessful 175, failed 0
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:16.253 rmmod nvme_tcp
00:10:16.253 rmmod nvme_fabrics
00:10:16.253 rmmod nvme_keyring
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 68936 ']'
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 68936
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 68936 ']'
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 68936
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68936
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:16.253 killing process with pid 68936
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68936'
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 68936
00:10:16.253 15:23:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 68936
00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:10:16.253 15:23:15
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:16.253 00:10:16.253 real 0m24.419s 00:10:16.253 user 0m39.164s 00:10:16.253 sys 0m6.521s 00:10:16.253 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.254 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.254 ************************************ 00:10:16.254 END TEST nvmf_zcopy 00:10:16.254 ************************************ 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.513 ************************************ 00:10:16.513 START TEST nvmf_nmic 00:10:16.513 ************************************ 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:16.513 * Looking for test storage... 00:10:16.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:16.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.513 --rc genhtml_branch_coverage=1 00:10:16.513 --rc genhtml_function_coverage=1 00:10:16.513 --rc genhtml_legend=1 00:10:16.513 --rc geninfo_all_blocks=1 00:10:16.513 --rc geninfo_unexecuted_blocks=1 00:10:16.513 00:10:16.513 ' 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:16.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.513 --rc genhtml_branch_coverage=1 00:10:16.513 --rc genhtml_function_coverage=1 00:10:16.513 --rc genhtml_legend=1 00:10:16.513 --rc geninfo_all_blocks=1 00:10:16.513 --rc geninfo_unexecuted_blocks=1 00:10:16.513 00:10:16.513 ' 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:16.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.513 --rc genhtml_branch_coverage=1 00:10:16.513 --rc genhtml_function_coverage=1 00:10:16.513 --rc genhtml_legend=1 00:10:16.513 --rc geninfo_all_blocks=1 00:10:16.513 --rc geninfo_unexecuted_blocks=1 00:10:16.513 00:10:16.513 ' 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:16.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.513 --rc genhtml_branch_coverage=1 00:10:16.513 --rc genhtml_function_coverage=1 00:10:16.513 --rc genhtml_legend=1 00:10:16.513 --rc geninfo_all_blocks=1 00:10:16.513 --rc geninfo_unexecuted_blocks=1 00:10:16.513 00:10:16.513 ' 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.513 15:23:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.513 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.514 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:16.514 15:23:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:16.514 Cannot 
find device "nvmf_init_br" 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:16.514 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:16.774 Cannot find device "nvmf_init_br2" 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:16.774 Cannot find device "nvmf_tgt_br" 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.774 Cannot find device "nvmf_tgt_br2" 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:16.774 Cannot find device "nvmf_init_br" 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:16.774 Cannot find device "nvmf_init_br2" 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:16.774 Cannot find device "nvmf_tgt_br" 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:16.774 Cannot find device "nvmf_tgt_br2" 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:16.774 Cannot find device "nvmf_br" 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:16.774 Cannot find device "nvmf_init_if" 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:16.774 Cannot find device "nvmf_init_if2" 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.774 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:17.056 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:17.056 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:17.056 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:17.056 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:17.056 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:10:17.056 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:10:17.056 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:10:17.056 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:10:17.056 15:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:10:17.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:17.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms
00:10:17.056
00:10:17.056 --- 10.0.0.3 ping statistics ---
00:10:17.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:17.056 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:10:17.056 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:10:17.056 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:10:17.056 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:10:17.056
00:10:17.056 --- 10.0.0.4 ping statistics ---
00:10:17.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:17.057 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:17.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:17.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:10:17.057
00:10:17.057 --- 10.0.0.1 ping statistics ---
00:10:17.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:17.057 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:10:17.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:17.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms
00:10:17.057
00:10:17.057 --- 10.0.0.2 ping statistics ---
00:10:17.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:17.057 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=69469
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 69469
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 69469 ']'
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:17.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:17.057 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:17.057 [2024-10-01 15:23:16.111397] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization...
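Here nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace and waitforlisten polls until the JSON-RPC socket answers. A minimal stand-in for those helpers (paths taken from this run; the polling loop is an approximation of waitforlisten, which retries up to the max_retries=100 shown above):

    # Start the NVMe-oF target in the test namespace (core mask 0xF, all
    # tracepoint groups enabled) and remember its pid for later cleanup.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait until the app is up and serving JSON-RPC on /var/tmp/spdk.sock.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done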
00:10:17.057 [2024-10-01 15:23:16.111514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.318 [2024-10-01 15:23:16.255480] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.318 [2024-10-01 15:23:16.328871] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.318 [2024-10-01 15:23:16.328932] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.318 [2024-10-01 15:23:16.328946] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.318 [2024-10-01 15:23:16.328957] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.318 [2024-10-01 15:23:16.328965] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.318 [2024-10-01 15:23:16.329065] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.318 [2024-10-01 15:23:16.329162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.318 [2024-10-01 15:23:16.329220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.318 [2024-10-01 15:23:16.329226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.318 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.318 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:17.318 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:17.318 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:17.318 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.318 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.318 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.318 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.318 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.581 [2024-10-01 15:23:16.489794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.581 Malloc0 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.581 15:23:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.581 [2024-10-01 15:23:16.542194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.581 test case1: single bdev can't be used in multiple subsystems 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.581 [2024-10-01 15:23:16.565985] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:17.581 [2024-10-01 15:23:16.566035] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:17.581 [2024-10-01 15:23:16.566051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.581 2024/10/01 15:23:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:17.581 request: 00:10:17.581 { 00:10:17.581 "method": "nvmf_subsystem_add_ns", 00:10:17.581 "params": { 00:10:17.581 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:17.581 "namespace": { 00:10:17.581 "bdev_name": "Malloc0", 00:10:17.581 "no_auto_visible": false 00:10:17.581 } 00:10:17.581 } 00:10:17.581 } 00:10:17.581 Got JSON-RPC error response 00:10:17.581 GoRPCClient: error on JSON-RPC call 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:17.581 Adding namespace failed - expected result. 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:17.581 test case2: host connect to nvmf target in multiple paths 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.581 [2024-10-01 15:23:16.578172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:17.581 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:17.843 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:17.843 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:17.843 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.843 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:17.843 15:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:20.375 15:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:20.375 15:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:20.375 15:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.375 15:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:20.375 15:23:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.375 15:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:20.375 15:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:20.375 [global] 00:10:20.375 thread=1 00:10:20.375 invalidate=1 00:10:20.375 rw=write 00:10:20.375 time_based=1 00:10:20.375 runtime=1 00:10:20.375 ioengine=libaio 00:10:20.375 direct=1 00:10:20.375 bs=4096 00:10:20.375 iodepth=1 00:10:20.375 norandommap=0 00:10:20.375 numjobs=1 00:10:20.375 00:10:20.375 verify_dump=1 00:10:20.375 verify_backlog=512 00:10:20.375 verify_state_save=0 00:10:20.375 do_verify=1 00:10:20.375 verify=crc32c-intel 00:10:20.375 [job0] 00:10:20.375 filename=/dev/nvme0n1 00:10:20.375 Could not set queue depth (nvme0n1) 00:10:20.375 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.375 fio-3.35 00:10:20.375 Starting 1 thread 00:10:21.309 00:10:21.309 job0: (groupid=0, jobs=1): err= 0: pid=69565: Tue Oct 1 15:23:20 2024 00:10:21.309 read: IOPS=2693, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:10:21.309 slat (nsec): min=14166, max=53064, avg=18556.07, stdev=5921.95 00:10:21.309 clat (usec): min=129, max=487, avg=174.11, stdev=30.27 00:10:21.309 lat (usec): min=145, max=503, avg=192.67, stdev=31.86 00:10:21.309 clat percentiles (usec): 00:10:21.309 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:21.309 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 176], 00:10:21.309 | 70.00th=[ 188], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 227], 00:10:21.309 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 318], 99.95th=[ 392], 00:10:21.309 | 99.99th=[ 486] 00:10:21.309 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:21.309 slat (usec): min=19, max=164, avg=24.27, stdev= 7.29 00:10:21.309 clat (usec): min=88, max=299, avg=128.37, stdev=20.04 00:10:21.309 lat (usec): min=113, max=453, avg=152.63, stdev=21.92 00:10:21.309 clat percentiles (usec): 00:10:21.309 | 1.00th=[ 97], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 111], 00:10:21.309 | 30.00th=[ 116], 40.00th=[ 121], 50.00th=[ 127], 60.00th=[ 135], 00:10:21.309 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 163], 00:10:21.309 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 198], 99.95th=[ 289], 00:10:21.309 | 99.99th=[ 302] 00:10:21.309 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:21.309 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:21.309 lat (usec) : 100=2.70%, 250=96.69%, 500=0.61% 00:10:21.309 cpu : usr=2.80%, sys=9.10%, ctx=5768, majf=0, minf=5 00:10:21.309 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.309 issued rwts: total=2696,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.309 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.309 00:10:21.309 Run status group 0 (all jobs): 00:10:21.309 READ: bw=10.5MiB/s (11.0MB/s), 10.5MiB/s-10.5MiB/s (11.0MB/s-11.0MB/s), io=10.5MiB (11.0MB), run=1001-1001msec 00:10:21.309 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), 
run=1001-1001msec 00:10:21.309 00:10:21.309 Disk stats (read/write): 00:10:21.309 nvme0n1: ios=2610/2569, merge=0/0, ticks=487/344, in_queue=831, util=91.28% 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.309 rmmod nvme_tcp 00:10:21.309 rmmod nvme_fabrics 00:10:21.309 rmmod nvme_keyring 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 69469 ']' 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 69469 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 69469 ']' 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 69469 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69469 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.309 killing process with pid 69469 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69469' 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 69469 00:10:21.309 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 69469 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:21.568 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:21.827 00:10:21.827 real 0m5.461s 00:10:21.827 user 0m16.787s 00:10:21.827 sys 0m1.388s 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.827 15:23:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:21.827 ************************************ 00:10:21.827 END TEST nvmf_nmic 00:10:21.827 ************************************ 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.827 ************************************ 00:10:21.827 START TEST nvmf_fio_target 00:10:21.827 ************************************ 00:10:21.827 15:23:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:22.088 * Looking for test storage... 00:10:22.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:22.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.088 --rc genhtml_branch_coverage=1 00:10:22.088 --rc genhtml_function_coverage=1 00:10:22.088 --rc genhtml_legend=1 00:10:22.088 --rc geninfo_all_blocks=1 00:10:22.088 --rc geninfo_unexecuted_blocks=1 00:10:22.088 00:10:22.088 ' 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:22.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.088 --rc genhtml_branch_coverage=1 00:10:22.088 --rc genhtml_function_coverage=1 00:10:22.088 --rc genhtml_legend=1 00:10:22.088 --rc geninfo_all_blocks=1 00:10:22.088 --rc geninfo_unexecuted_blocks=1 00:10:22.088 00:10:22.088 ' 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:22.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.088 --rc genhtml_branch_coverage=1 00:10:22.088 --rc genhtml_function_coverage=1 00:10:22.088 --rc genhtml_legend=1 00:10:22.088 --rc geninfo_all_blocks=1 00:10:22.088 --rc geninfo_unexecuted_blocks=1 00:10:22.088 00:10:22.088 ' 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:22.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.088 --rc genhtml_branch_coverage=1 00:10:22.088 --rc genhtml_function_coverage=1 00:10:22.088 --rc genhtml_legend=1 00:10:22.088 --rc geninfo_all_blocks=1 00:10:22.088 --rc geninfo_unexecuted_blocks=1 00:10:22.088 00:10:22.088 ' 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:22.088 
15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.088 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.088 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.089 15:23:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:22.089 Cannot find device "nvmf_init_br" 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:22.089 Cannot find device "nvmf_init_br2" 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:22.089 Cannot find device "nvmf_tgt_br" 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.089 Cannot find device "nvmf_tgt_br2" 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:22.089 Cannot find device "nvmf_init_br" 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:22.089 Cannot find device "nvmf_init_br2" 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:22.089 Cannot find device "nvmf_tgt_br" 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:22.089 Cannot find device "nvmf_tgt_br2" 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:22.089 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:22.348 Cannot find device "nvmf_br" 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:22.348 Cannot find device "nvmf_init_if" 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:22.348 Cannot find device "nvmf_init_if2" 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:22.348 
15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:22.348 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:22.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:22.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:10:22.607 00:10:22.607 --- 10.0.0.3 ping statistics --- 00:10:22.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.607 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:22.607 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:22.607 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:10:22.607 00:10:22.607 --- 10.0.0.4 ping statistics --- 00:10:22.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.607 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:22.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:22.607 00:10:22.607 --- 10.0.0.1 ping statistics --- 00:10:22.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.607 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:22.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:22.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:10:22.607 00:10:22.607 --- 10.0.0.2 ping statistics --- 00:10:22.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.607 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=69802 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 69802 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 69802 ']' 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.607 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.607 [2024-10-01 15:23:21.642396] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
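With the target listening on /var/tmp/spdk.sock, all further configuration is driven over JSON-RPC: the nmic test above used the rpc_cmd wrapper, and fio.sh below calls /home/vagrant/spdk_repo/spdk/scripts/rpc.py directly. A condensed sketch of the subsystem bring-up, restricted to methods that appear verbatim in this log (rpc.py stands in for the full script path; the NQN and serial are the values the tests use):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # initiator side, via the kernel nvme-tcp driver (NVME_HOST holds --hostnqn/--hostid)
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

The nmic run above also exercises the expected failure path: attaching Malloc0 to a second subsystem (cnode2) fails with 'bdev Malloc0 already claimed: type exclusive_write', which the Go RPC client reports as JSON-RPC error Code=-32602.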
00:10:22.607 [2024-10-01 15:23:21.642510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.607 [2024-10-01 15:23:21.774209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.867 [2024-10-01 15:23:21.862173] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.867 [2024-10-01 15:23:21.862255] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.867 [2024-10-01 15:23:21.862276] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.867 [2024-10-01 15:23:21.862292] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.867 [2024-10-01 15:23:21.862304] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.867 [2024-10-01 15:23:21.865465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.867 [2024-10-01 15:23:21.865550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.867 [2024-10-01 15:23:21.866102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.867 [2024-10-01 15:23:21.866124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.867 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.867 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:22.867 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:22.867 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.867 15:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.867 15:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.867 15:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:23.434 [2024-10-01 15:23:22.305922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.434 15:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.693 15:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:23.693 15:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.953 15:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:23.953 15:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.211 15:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:24.211 15:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.778 15:23:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:24.778 15:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:24.778 15:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.344 15:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:25.344 15:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.602 15:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:25.602 15:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.861 15:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:25.861 15:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:26.120 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:26.378 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:26.378 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:26.943 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:26.943 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.200 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:27.458 [2024-10-01 15:23:26.408519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:27.458 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:27.717 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:27.975 15:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:28.234 15:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:28.234 15:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:28.234 15:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:10:28.234 15:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:28.234 15:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:28.234 15:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:30.135 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:30.135 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:30.135 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.135 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:30.135 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.135 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:30.135 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:30.135 [global] 00:10:30.135 thread=1 00:10:30.135 invalidate=1 00:10:30.135 rw=write 00:10:30.135 time_based=1 00:10:30.135 runtime=1 00:10:30.135 ioengine=libaio 00:10:30.135 direct=1 00:10:30.135 bs=4096 00:10:30.135 iodepth=1 00:10:30.135 norandommap=0 00:10:30.135 numjobs=1 00:10:30.135 00:10:30.135 verify_dump=1 00:10:30.135 verify_backlog=512 00:10:30.135 verify_state_save=0 00:10:30.135 do_verify=1 00:10:30.135 verify=crc32c-intel 00:10:30.135 [job0] 00:10:30.135 filename=/dev/nvme0n1 00:10:30.135 [job1] 00:10:30.135 filename=/dev/nvme0n2 00:10:30.135 [job2] 00:10:30.135 filename=/dev/nvme0n3 00:10:30.135 [job3] 00:10:30.135 filename=/dev/nvme0n4 00:10:30.135 Could not set queue depth (nvme0n1) 00:10:30.135 Could not set queue depth (nvme0n2) 00:10:30.135 Could not set queue depth (nvme0n3) 00:10:30.135 Could not set queue depth (nvme0n4) 00:10:30.393 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.393 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.393 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.393 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.393 fio-3.35 00:10:30.393 Starting 4 threads 00:10:31.768 00:10:31.768 job0: (groupid=0, jobs=1): err= 0: pid=70095: Tue Oct 1 15:23:30 2024 00:10:31.768 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:31.768 slat (nsec): min=17179, max=85223, avg=30333.58, stdev=10323.37 00:10:31.768 clat (usec): min=167, max=615, avg=351.80, stdev=64.57 00:10:31.768 lat (usec): min=198, max=642, avg=382.14, stdev=70.49 00:10:31.768 clat percentiles (usec): 00:10:31.768 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 289], 00:10:31.768 | 30.00th=[ 297], 40.00th=[ 322], 50.00th=[ 343], 60.00th=[ 363], 00:10:31.768 | 70.00th=[ 392], 80.00th=[ 420], 90.00th=[ 449], 95.00th=[ 461], 00:10:31.768 | 99.00th=[ 482], 99.50th=[ 502], 99.90th=[ 545], 99.95th=[ 619], 00:10:31.768 | 99.99th=[ 619] 00:10:31.768 write: IOPS=1537, BW=6150KiB/s (6297kB/s)(6156KiB/1001msec); 0 zone resets 00:10:31.768 slat 
(usec): min=20, max=149, avg=39.28, stdev=10.68 00:10:31.768 clat (usec): min=136, max=383, avg=222.29, stdev=24.81 00:10:31.768 lat (usec): min=179, max=463, avg=261.57, stdev=23.79 00:10:31.768 clat percentiles (usec): 00:10:31.768 | 1.00th=[ 165], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 204], 00:10:31.768 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 227], 00:10:31.768 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 262], 00:10:31.768 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 367], 99.95th=[ 383], 00:10:31.768 | 99.99th=[ 383] 00:10:31.768 bw ( KiB/s): min= 8192, max= 8192, per=22.63%, avg=8192.00, stdev= 0.00, samples=1 00:10:31.768 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:31.768 lat (usec) : 250=45.46%, 500=54.28%, 750=0.26% 00:10:31.768 cpu : usr=2.10%, sys=8.50%, ctx=3075, majf=0, minf=7 00:10:31.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.768 issued rwts: total=1536,1539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.768 job1: (groupid=0, jobs=1): err= 0: pid=70096: Tue Oct 1 15:23:30 2024 00:10:31.768 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:31.768 slat (nsec): min=13772, max=46030, avg=16436.01, stdev=3577.62 00:10:31.768 clat (usec): min=148, max=308, avg=182.65, stdev=22.51 00:10:31.768 lat (usec): min=163, max=323, avg=199.09, stdev=22.54 00:10:31.768 clat percentiles (usec): 00:10:31.768 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:10:31.768 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:10:31.768 | 70.00th=[ 190], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 229], 00:10:31.768 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 265], 99.95th=[ 273], 00:10:31.768 | 99.99th=[ 310] 00:10:31.768 write: IOPS=2914, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:10:31.768 slat (nsec): min=19527, max=89028, avg=23669.53, stdev=6447.70 00:10:31.768 clat (usec): min=109, max=709, avg=140.90, stdev=23.97 00:10:31.768 lat (usec): min=131, max=741, avg=164.57, stdev=25.76 00:10:31.768 clat percentiles (usec): 00:10:31.768 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 126], 00:10:31.768 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:10:31.768 | 70.00th=[ 147], 80.00th=[ 155], 90.00th=[ 165], 95.00th=[ 176], 00:10:31.768 | 99.00th=[ 221], 99.50th=[ 260], 99.90th=[ 375], 99.95th=[ 429], 00:10:31.768 | 99.99th=[ 709] 00:10:31.769 bw ( KiB/s): min=12288, max=12288, per=33.95%, avg=12288.00, stdev= 0.00, samples=1 00:10:31.769 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:31.769 lat (usec) : 250=99.34%, 500=0.64%, 750=0.02% 00:10:31.769 cpu : usr=2.50%, sys=8.20%, ctx=5478, majf=0, minf=17 00:10:31.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.769 issued rwts: total=2560,2917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.769 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.769 job2: (groupid=0, jobs=1): err= 0: pid=70097: Tue Oct 1 15:23:30 2024 00:10:31.769 read: IOPS=1534, BW=6138KiB/s 
(6285kB/s)(6144KiB/1001msec) 00:10:31.769 slat (nsec): min=14294, max=68181, avg=19349.99, stdev=5686.78 00:10:31.769 clat (usec): min=181, max=964, avg=263.67, stdev=38.23 00:10:31.769 lat (usec): min=196, max=999, avg=283.02, stdev=40.92 00:10:31.769 clat percentiles (usec): 00:10:31.769 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 247], 00:10:31.769 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:10:31.769 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 310], 00:10:31.769 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 553], 99.95th=[ 963], 00:10:31.769 | 99.99th=[ 963] 00:10:31.769 write: IOPS=2039, BW=8160KiB/s (8356kB/s)(8168KiB/1001msec); 0 zone resets 00:10:31.769 slat (usec): min=19, max=126, avg=30.77, stdev= 7.82 00:10:31.769 clat (usec): min=114, max=423, avg=242.08, stdev=63.10 00:10:31.769 lat (usec): min=142, max=460, avg=272.85, stdev=65.07 00:10:31.769 clat percentiles (usec): 00:10:31.769 | 1.00th=[ 130], 5.00th=[ 141], 10.00th=[ 151], 20.00th=[ 206], 00:10:31.769 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 00:10:31.769 | 70.00th=[ 251], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 355], 00:10:31.769 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 420], 99.95th=[ 424], 00:10:31.769 | 99.99th=[ 424] 00:10:31.769 bw ( KiB/s): min= 8192, max= 8192, per=22.63%, avg=8192.00, stdev= 0.00, samples=1 00:10:31.769 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:31.769 lat (usec) : 250=49.75%, 500=50.20%, 750=0.03%, 1000=0.03% 00:10:31.769 cpu : usr=1.20%, sys=7.60%, ctx=3578, majf=0, minf=7 00:10:31.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.769 issued rwts: total=1536,2042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.769 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.769 job3: (groupid=0, jobs=1): err= 0: pid=70098: Tue Oct 1 15:23:30 2024 00:10:31.769 read: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(9.81MiB/1001msec) 00:10:31.769 slat (nsec): min=13800, max=66419, avg=20584.48, stdev=7964.47 00:10:31.769 clat (usec): min=149, max=835, avg=206.73, stdev=73.88 00:10:31.769 lat (usec): min=163, max=849, avg=227.32, stdev=79.09 00:10:31.769 clat percentiles (usec): 00:10:31.769 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:10:31.769 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:31.769 | 70.00th=[ 186], 80.00th=[ 208], 90.00th=[ 343], 95.00th=[ 359], 00:10:31.769 | 99.00th=[ 441], 99.50th=[ 502], 99.90th=[ 529], 99.95th=[ 529], 00:10:31.769 | 99.99th=[ 832] 00:10:31.769 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:31.769 slat (nsec): min=19576, max=85906, avg=25289.13, stdev=8448.08 00:10:31.769 clat (usec): min=111, max=2117, avg=138.00, stdev=44.63 00:10:31.769 lat (usec): min=135, max=2138, avg=163.29, stdev=46.62 00:10:31.769 clat percentiles (usec): 00:10:31.769 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 127], 00:10:31.769 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:10:31.769 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 161], 00:10:31.769 | 99.00th=[ 204], 99.50th=[ 285], 99.90th=[ 437], 99.95th=[ 537], 00:10:31.769 | 99.99th=[ 2114] 00:10:31.769 bw ( KiB/s): min=12288, max=12288, per=33.95%, avg=12288.00, stdev= 0.00, samples=1 00:10:31.769 
iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:31.769 lat (usec) : 250=90.34%, 500=9.35%, 750=0.28%, 1000=0.02% 00:10:31.769 lat (msec) : 4=0.02% 00:10:31.769 cpu : usr=2.20%, sys=9.20%, ctx=5073, majf=0, minf=8 00:10:31.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.769 issued rwts: total=2512,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.769 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.769 00:10:31.769 Run status group 0 (all jobs): 00:10:31.769 READ: bw=31.8MiB/s (33.3MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.8MiB (33.4MB), run=1001-1001msec 00:10:31.769 WRITE: bw=35.3MiB/s (37.1MB/s), 6150KiB/s-11.4MiB/s (6297kB/s-11.9MB/s), io=35.4MiB (37.1MB), run=1001-1001msec 00:10:31.769 00:10:31.769 Disk stats (read/write): 00:10:31.769 nvme0n1: ios=1243/1536, merge=0/0, ticks=495/362, in_queue=857, util=93.09% 00:10:31.769 nvme0n2: ios=2222/2560, merge=0/0, ticks=477/373, in_queue=850, util=93.12% 00:10:31.769 nvme0n3: ios=1536/1668, merge=0/0, ticks=408/395, in_queue=803, util=89.36% 00:10:31.769 nvme0n4: ios=2161/2560, merge=0/0, ticks=401/382, in_queue=783, util=89.82% 00:10:31.769 15:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:31.769 [global] 00:10:31.769 thread=1 00:10:31.769 invalidate=1 00:10:31.769 rw=randwrite 00:10:31.769 time_based=1 00:10:31.769 runtime=1 00:10:31.769 ioengine=libaio 00:10:31.769 direct=1 00:10:31.769 bs=4096 00:10:31.769 iodepth=1 00:10:31.769 norandommap=0 00:10:31.769 numjobs=1 00:10:31.769 00:10:31.769 verify_dump=1 00:10:31.769 verify_backlog=512 00:10:31.769 verify_state_save=0 00:10:31.769 do_verify=1 00:10:31.769 verify=crc32c-intel 00:10:31.769 [job0] 00:10:31.769 filename=/dev/nvme0n1 00:10:31.769 [job1] 00:10:31.769 filename=/dev/nvme0n2 00:10:31.769 [job2] 00:10:31.769 filename=/dev/nvme0n3 00:10:31.769 [job3] 00:10:31.769 filename=/dev/nvme0n4 00:10:31.769 Could not set queue depth (nvme0n1) 00:10:31.769 Could not set queue depth (nvme0n2) 00:10:31.769 Could not set queue depth (nvme0n3) 00:10:31.769 Could not set queue depth (nvme0n4) 00:10:31.769 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.769 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.769 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.769 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.769 fio-3.35 00:10:31.769 Starting 4 threads 00:10:33.144 00:10:33.144 job0: (groupid=0, jobs=1): err= 0: pid=70151: Tue Oct 1 15:23:31 2024 00:10:33.144 read: IOPS=1864, BW=7457KiB/s (7636kB/s)(7464KiB/1001msec) 00:10:33.144 slat (usec): min=13, max=106, avg=19.04, stdev= 7.96 00:10:33.144 clat (usec): min=139, max=6274, avg=242.80, stdev=276.88 00:10:33.144 lat (usec): min=154, max=6295, avg=261.85, stdev=279.11 00:10:33.144 clat percentiles (usec): 00:10:33.144 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:10:33.144 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 190], 00:10:33.144 | 70.00th=[ 243], 80.00th=[ 
285], 90.00th=[ 379], 95.00th=[ 506], 00:10:33.144 | 99.00th=[ 586], 99.50th=[ 799], 99.90th=[ 5080], 99.95th=[ 6259], 00:10:33.144 | 99.99th=[ 6259] 00:10:33.144 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:33.144 slat (usec): min=19, max=105, avg=33.72, stdev=12.61 00:10:33.144 clat (usec): min=103, max=1931, avg=211.39, stdev=83.92 00:10:33.144 lat (usec): min=126, max=1956, avg=245.11, stdev=89.78 00:10:33.144 clat percentiles (usec): 00:10:33.144 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 135], 00:10:33.144 | 30.00th=[ 153], 40.00th=[ 165], 50.00th=[ 217], 60.00th=[ 247], 00:10:33.144 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 334], 00:10:33.144 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 627], 99.95th=[ 1057], 00:10:33.144 | 99.99th=[ 1926] 00:10:33.144 bw ( KiB/s): min= 8175, max= 8175, per=25.10%, avg=8175.00, stdev= 0.00, samples=1 00:10:33.144 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:33.144 lat (usec) : 250=67.63%, 500=29.87%, 750=2.15%, 1000=0.10% 00:10:33.144 lat (msec) : 2=0.08%, 4=0.08%, 10=0.10% 00:10:33.144 cpu : usr=2.20%, sys=7.90%, ctx=3915, majf=0, minf=11 00:10:33.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.144 issued rwts: total=1866,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.144 job1: (groupid=0, jobs=1): err= 0: pid=70152: Tue Oct 1 15:23:31 2024 00:10:33.144 read: IOPS=1064, BW=4260KiB/s (4362kB/s)(4264KiB/1001msec) 00:10:33.144 slat (nsec): min=11018, max=54499, avg=23385.81, stdev=5477.66 00:10:33.144 clat (usec): min=200, max=880, avg=428.58, stdev=85.36 00:10:33.144 lat (usec): min=231, max=914, avg=451.97, stdev=85.27 00:10:33.144 clat percentiles (usec): 00:10:33.144 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:10:33.144 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 400], 00:10:33.144 | 70.00th=[ 494], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 570], 00:10:33.144 | 99.00th=[ 619], 99.50th=[ 725], 99.90th=[ 775], 99.95th=[ 881], 00:10:33.144 | 99.99th=[ 881] 00:10:33.144 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:33.144 slat (nsec): min=19684, max=69706, avg=34691.48, stdev=7232.13 00:10:33.144 clat (usec): min=153, max=936, avg=297.61, stdev=54.73 00:10:33.144 lat (usec): min=205, max=981, avg=332.30, stdev=55.22 00:10:33.144 clat percentiles (usec): 00:10:33.144 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 269], 00:10:33.144 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:10:33.144 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 363], 95.00th=[ 388], 00:10:33.144 | 99.00th=[ 445], 99.50th=[ 635], 99.90th=[ 865], 99.95th=[ 938], 00:10:33.144 | 99.99th=[ 938] 00:10:33.144 bw ( KiB/s): min= 5616, max= 5616, per=17.25%, avg=5616.00, stdev= 0.00, samples=1 00:10:33.144 iops : min= 1404, max= 1404, avg=1404.00, stdev= 0.00, samples=1 00:10:33.144 lat (usec) : 250=2.92%, 500=84.67%, 750=12.14%, 1000=0.27% 00:10:33.144 cpu : usr=1.20%, sys=6.90%, ctx=2617, majf=0, minf=7 00:10:33.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.144 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.144 issued rwts: total=1066,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.144 job2: (groupid=0, jobs=1): err= 0: pid=70153: Tue Oct 1 15:23:31 2024 00:10:33.144 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:33.144 slat (nsec): min=13635, max=55507, avg=18260.95, stdev=6251.62 00:10:33.144 clat (usec): min=146, max=2116, avg=174.90, stdev=43.52 00:10:33.144 lat (usec): min=161, max=2131, avg=193.16, stdev=44.24 00:10:33.144 clat percentiles (usec): 00:10:33.144 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:10:33.144 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:10:33.144 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 200], 00:10:33.144 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 416], 99.95th=[ 734], 00:10:33.144 | 99.99th=[ 2114] 00:10:33.144 write: IOPS=3025, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec); 0 zone resets 00:10:33.144 slat (nsec): min=19574, max=88049, avg=26631.19, stdev=8002.96 00:10:33.144 clat (usec): min=109, max=285, avg=136.22, stdev=18.83 00:10:33.144 lat (usec): min=130, max=373, avg=162.85, stdev=23.21 00:10:33.144 clat percentiles (usec): 00:10:33.144 | 1.00th=[ 115], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 124], 00:10:33.144 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:10:33.144 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 157], 95.00th=[ 180], 00:10:33.144 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 231], 99.95th=[ 265], 00:10:33.144 | 99.99th=[ 285] 00:10:33.144 bw ( KiB/s): min=12288, max=12288, per=37.74%, avg=12288.00, stdev= 0.00, samples=1 00:10:33.144 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:33.144 lat (usec) : 250=99.45%, 500=0.52%, 750=0.02% 00:10:33.144 lat (msec) : 4=0.02% 00:10:33.144 cpu : usr=2.60%, sys=9.70%, ctx=5590, majf=0, minf=13 00:10:33.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.144 issued rwts: total=2560,3029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.144 job3: (groupid=0, jobs=1): err= 0: pid=70154: Tue Oct 1 15:23:31 2024 00:10:33.144 read: IOPS=1065, BW=4264KiB/s (4366kB/s)(4268KiB/1001msec) 00:10:33.144 slat (nsec): min=13022, max=68868, avg=23833.90, stdev=5479.93 00:10:33.144 clat (usec): min=265, max=873, avg=428.03, stdev=85.56 00:10:33.144 lat (usec): min=289, max=898, avg=451.87, stdev=85.44 00:10:33.144 clat percentiles (usec): 00:10:33.144 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 359], 00:10:33.144 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 404], 00:10:33.144 | 70.00th=[ 490], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 570], 00:10:33.144 | 99.00th=[ 652], 99.50th=[ 709], 99.90th=[ 807], 99.95th=[ 873], 00:10:33.144 | 99.99th=[ 873] 00:10:33.144 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:33.144 slat (usec): min=18, max=481, avg=36.37, stdev=13.74 00:10:33.144 clat (usec): min=4, max=768, avg=295.57, stdev=54.72 00:10:33.144 lat (usec): min=203, max=807, avg=331.93, stdev=55.27 00:10:33.144 clat percentiles (usec): 00:10:33.144 | 1.00th=[ 231], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:10:33.144 | 30.00th=[ 273], 
40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:10:33.144 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 359], 95.00th=[ 383], 00:10:33.144 | 99.00th=[ 570], 99.50th=[ 660], 99.90th=[ 709], 99.95th=[ 766], 00:10:33.144 | 99.99th=[ 766] 00:10:33.144 bw ( KiB/s): min= 5620, max= 5620, per=17.26%, avg=5620.00, stdev= 0.00, samples=1 00:10:33.144 iops : min= 1405, max= 1405, avg=1405.00, stdev= 0.00, samples=1 00:10:33.144 lat (usec) : 10=0.04%, 250=2.65%, 500=85.06%, 750=12.10%, 1000=0.15% 00:10:33.144 cpu : usr=2.00%, sys=6.30%, ctx=2610, majf=0, minf=15 00:10:33.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.145 issued rwts: total=1067,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.145 00:10:33.145 Run status group 0 (all jobs): 00:10:33.145 READ: bw=25.6MiB/s (26.8MB/s), 4260KiB/s-9.99MiB/s (4362kB/s-10.5MB/s), io=25.6MiB (26.9MB), run=1001-1001msec 00:10:33.145 WRITE: bw=31.8MiB/s (33.3MB/s), 6138KiB/s-11.8MiB/s (6285kB/s-12.4MB/s), io=31.8MiB (33.4MB), run=1001-1001msec 00:10:33.145 00:10:33.145 Disk stats (read/write): 00:10:33.145 nvme0n1: ios=1586/1618, merge=0/0, ticks=426/399, in_queue=825, util=88.98% 00:10:33.145 nvme0n2: ios=1073/1182, merge=0/0, ticks=488/365, in_queue=853, util=90.41% 00:10:33.145 nvme0n3: ios=2359/2560, merge=0/0, ticks=425/372, in_queue=797, util=89.34% 00:10:33.145 nvme0n4: ios=1024/1184, merge=0/0, ticks=429/375, in_queue=804, util=89.80% 00:10:33.145 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:33.145 [global] 00:10:33.145 thread=1 00:10:33.145 invalidate=1 00:10:33.145 rw=write 00:10:33.145 time_based=1 00:10:33.145 runtime=1 00:10:33.145 ioengine=libaio 00:10:33.145 direct=1 00:10:33.145 bs=4096 00:10:33.145 iodepth=128 00:10:33.145 norandommap=0 00:10:33.145 numjobs=1 00:10:33.145 00:10:33.145 verify_dump=1 00:10:33.145 verify_backlog=512 00:10:33.145 verify_state_save=0 00:10:33.145 do_verify=1 00:10:33.145 verify=crc32c-intel 00:10:33.145 [job0] 00:10:33.145 filename=/dev/nvme0n1 00:10:33.145 [job1] 00:10:33.145 filename=/dev/nvme0n2 00:10:33.145 [job2] 00:10:33.145 filename=/dev/nvme0n3 00:10:33.145 [job3] 00:10:33.145 filename=/dev/nvme0n4 00:10:33.145 Could not set queue depth (nvme0n1) 00:10:33.145 Could not set queue depth (nvme0n2) 00:10:33.145 Could not set queue depth (nvme0n3) 00:10:33.145 Could not set queue depth (nvme0n4) 00:10:33.145 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.145 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.145 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.145 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.145 fio-3.35 00:10:33.145 Starting 4 threads 00:10:34.518 00:10:34.518 job0: (groupid=0, jobs=1): err= 0: pid=70209: Tue Oct 1 15:23:33 2024 00:10:34.518 read: IOPS=2541, BW=9.93MiB/s (10.4MB/s)(9.98MiB/1005msec) 00:10:34.518 slat (usec): min=2, max=7767, avg=198.38, stdev=836.01 00:10:34.518 clat (usec): min=1406, max=32949, avg=25297.13, 
stdev=4221.43 00:10:34.518 lat (usec): min=5761, max=32966, avg=25495.50, stdev=4177.30 00:10:34.518 clat percentiles (usec): 00:10:34.518 | 1.00th=[ 7111], 5.00th=[16581], 10.00th=[20317], 20.00th=[22676], 00:10:34.518 | 30.00th=[23987], 40.00th=[25035], 50.00th=[25297], 60.00th=[26608], 00:10:34.518 | 70.00th=[28443], 80.00th=[29230], 90.00th=[29492], 95.00th=[30016], 00:10:34.518 | 99.00th=[32113], 99.50th=[32375], 99.90th=[32900], 99.95th=[32900], 00:10:34.518 | 99.99th=[32900] 00:10:34.518 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:10:34.518 slat (usec): min=9, max=7828, avg=186.65, stdev=865.18 00:10:34.518 clat (usec): min=12208, max=34175, avg=24256.82, stdev=4224.35 00:10:34.518 lat (usec): min=13491, max=34198, avg=24443.47, stdev=4186.04 00:10:34.518 clat percentiles (usec): 00:10:34.518 | 1.00th=[14746], 5.00th=[17171], 10.00th=[18744], 20.00th=[20055], 00:10:34.518 | 30.00th=[21890], 40.00th=[22938], 50.00th=[23987], 60.00th=[25297], 00:10:34.518 | 70.00th=[27395], 80.00th=[28967], 90.00th=[29492], 95.00th=[29754], 00:10:34.518 | 99.00th=[32113], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:10:34.518 | 99.99th=[34341] 00:10:34.518 bw ( KiB/s): min= 8192, max=12263, per=17.73%, avg=10227.50, stdev=2878.63, samples=2 00:10:34.518 iops : min= 2048, max= 3065, avg=2556.50, stdev=719.13, samples=2 00:10:34.518 lat (msec) : 2=0.02%, 10=0.63%, 20=12.69%, 50=86.66% 00:10:34.518 cpu : usr=2.19%, sys=6.57%, ctx=612, majf=0, minf=13 00:10:34.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:34.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.519 issued rwts: total=2554,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.519 job1: (groupid=0, jobs=1): err= 0: pid=70210: Tue Oct 1 15:23:33 2024 00:10:34.519 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:10:34.519 slat (usec): min=5, max=5176, avg=108.40, stdev=476.18 00:10:34.519 clat (usec): min=3411, max=22752, avg=14099.21, stdev=2905.88 00:10:34.519 lat (usec): min=3419, max=22781, avg=14207.62, stdev=2938.75 00:10:34.519 clat percentiles (usec): 00:10:34.519 | 1.00th=[ 7242], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11469], 00:10:34.519 | 30.00th=[12125], 40.00th=[13042], 50.00th=[14091], 60.00th=[14746], 00:10:34.519 | 70.00th=[15795], 80.00th=[16909], 90.00th=[18220], 95.00th=[18744], 00:10:34.519 | 99.00th=[20841], 99.50th=[21365], 99.90th=[22414], 99.95th=[22414], 00:10:34.519 | 99.99th=[22676] 00:10:34.519 write: IOPS=4607, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:34.519 slat (usec): min=8, max=5010, avg=100.83, stdev=461.21 00:10:34.519 clat (usec): min=909, max=22523, avg=13342.37, stdev=2676.80 00:10:34.519 lat (usec): min=924, max=22536, avg=13443.20, stdev=2702.29 00:10:34.519 clat percentiles (usec): 00:10:34.519 | 1.00th=[ 8586], 5.00th=[10028], 10.00th=[10421], 20.00th=[11076], 00:10:34.519 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13042], 60.00th=[13435], 00:10:34.519 | 70.00th=[13829], 80.00th=[15664], 90.00th=[17433], 95.00th=[17695], 00:10:34.519 | 99.00th=[20841], 99.50th=[21103], 99.90th=[22152], 99.95th=[22152], 00:10:34.519 | 99.99th=[22414] 00:10:34.519 bw ( KiB/s): min=16160, max=20704, per=31.96%, avg=18432.00, stdev=3213.09, samples=2 00:10:34.519 iops : min= 4040, max= 5176, avg=4608.00, stdev=803.27, 
samples=2 00:10:34.519 lat (usec) : 1000=0.03% 00:10:34.519 lat (msec) : 4=0.16%, 10=4.99%, 20=93.16%, 50=1.66% 00:10:34.519 cpu : usr=4.30%, sys=11.29%, ctx=461, majf=0, minf=4 00:10:34.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:34.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.519 issued rwts: total=4608,4617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.519 job2: (groupid=0, jobs=1): err= 0: pid=70211: Tue Oct 1 15:23:33 2024 00:10:34.519 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:34.519 slat (usec): min=2, max=8992, avg=160.18, stdev=696.07 00:10:34.519 clat (usec): min=10179, max=33329, avg=20742.07, stdev=6198.92 00:10:34.519 lat (usec): min=10359, max=33341, avg=20902.25, stdev=6222.04 00:10:34.519 clat percentiles (usec): 00:10:34.519 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13435], 20.00th=[13960], 00:10:34.519 | 30.00th=[15401], 40.00th=[17171], 50.00th=[20317], 60.00th=[23200], 00:10:34.519 | 70.00th=[25035], 80.00th=[26608], 90.00th=[29492], 95.00th=[30278], 00:10:34.519 | 99.00th=[32900], 99.50th=[33162], 99.90th=[33424], 99.95th=[33424], 00:10:34.519 | 99.99th=[33424] 00:10:34.519 write: IOPS=3203, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1004msec); 0 zone resets 00:10:34.519 slat (usec): min=4, max=6345, avg=151.52, stdev=667.04 00:10:34.519 clat (usec): min=2263, max=36034, avg=19624.25, stdev=6587.41 00:10:34.519 lat (usec): min=4399, max=36055, avg=19775.77, stdev=6605.82 00:10:34.519 clat percentiles (usec): 00:10:34.519 | 1.00th=[ 7177], 5.00th=[11207], 10.00th=[12518], 20.00th=[13829], 00:10:34.519 | 30.00th=[14091], 40.00th=[15533], 50.00th=[17695], 60.00th=[22938], 00:10:34.519 | 70.00th=[23987], 80.00th=[25560], 90.00th=[29492], 95.00th=[29754], 00:10:34.519 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:10:34.519 | 99.99th=[35914] 00:10:34.519 bw ( KiB/s): min= 8320, max=16384, per=21.42%, avg=12352.00, stdev=5702.11, samples=2 00:10:34.519 iops : min= 2080, max= 4096, avg=3088.00, stdev=1425.53, samples=2 00:10:34.519 lat (msec) : 4=0.02%, 10=0.80%, 20=50.62%, 50=48.57% 00:10:34.519 cpu : usr=1.69%, sys=8.67%, ctx=636, majf=0, minf=17 00:10:34.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:34.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.519 issued rwts: total=3072,3216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.519 job3: (groupid=0, jobs=1): err= 0: pid=70212: Tue Oct 1 15:23:33 2024 00:10:34.519 read: IOPS=3598, BW=14.1MiB/s (14.7MB/s)(14.1MiB/1003msec) 00:10:34.519 slat (usec): min=5, max=9806, avg=126.88, stdev=647.40 00:10:34.519 clat (usec): min=828, max=36200, avg=15706.54, stdev=4709.19 00:10:34.519 lat (usec): min=4799, max=38860, avg=15833.42, stdev=4759.98 00:10:34.519 clat percentiles (usec): 00:10:34.519 | 1.00th=[ 9896], 5.00th=[10814], 10.00th=[11731], 20.00th=[12780], 00:10:34.519 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14222], 60.00th=[14877], 00:10:34.519 | 70.00th=[15533], 80.00th=[17433], 90.00th=[24249], 95.00th=[25035], 00:10:34.519 | 99.00th=[32113], 99.50th=[32375], 99.90th=[36439], 99.95th=[36439], 00:10:34.519 | 99.99th=[36439] 
00:10:34.519 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:34.519 slat (usec): min=9, max=6653, avg=125.49, stdev=539.22 00:10:34.519 clat (usec): min=5051, max=43830, avg=17000.25, stdev=6921.65 00:10:34.519 lat (usec): min=5063, max=43856, avg=17125.74, stdev=6971.36 00:10:34.519 clat percentiles (usec): 00:10:34.519 | 1.00th=[ 9634], 5.00th=[11863], 10.00th=[12518], 20.00th=[12649], 00:10:34.519 | 30.00th=[13042], 40.00th=[13698], 50.00th=[14222], 60.00th=[14484], 00:10:34.519 | 70.00th=[15664], 80.00th=[21627], 90.00th=[28443], 95.00th=[31589], 00:10:34.519 | 99.00th=[41681], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:10:34.519 | 99.99th=[43779] 00:10:34.519 bw ( KiB/s): min=12288, max=19656, per=27.70%, avg=15972.00, stdev=5209.96, samples=2 00:10:34.519 iops : min= 3072, max= 4914, avg=3993.00, stdev=1302.49, samples=2 00:10:34.519 lat (usec) : 1000=0.01% 00:10:34.519 lat (msec) : 10=1.36%, 20=78.17%, 50=20.45% 00:10:34.519 cpu : usr=3.39%, sys=10.28%, ctx=503, majf=0, minf=12 00:10:34.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:34.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.519 issued rwts: total=3609,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.519 00:10:34.519 Run status group 0 (all jobs): 00:10:34.519 READ: bw=53.8MiB/s (56.4MB/s), 9.93MiB/s-18.0MiB/s (10.4MB/s-18.8MB/s), io=54.1MiB (56.7MB), run=1002-1005msec 00:10:34.519 WRITE: bw=56.3MiB/s (59.1MB/s), 9.95MiB/s-18.0MiB/s (10.4MB/s-18.9MB/s), io=56.6MiB (59.3MB), run=1002-1005msec 00:10:34.519 00:10:34.519 Disk stats (read/write): 00:10:34.519 nvme0n1: ios=2098/2320, merge=0/0, ticks=13013/12093, in_queue=25106, util=87.98% 00:10:34.519 nvme0n2: ios=4053/4096, merge=0/0, ticks=17433/14748, in_queue=32181, util=88.59% 00:10:34.519 nvme0n3: ios=2560/2956, merge=0/0, ticks=12416/12520, in_queue=24936, util=88.65% 00:10:34.519 nvme0n4: ios=3072/3323, merge=0/0, ticks=16102/17637, in_queue=33739, util=89.68% 00:10:34.519 15:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:34.519 [global] 00:10:34.519 thread=1 00:10:34.519 invalidate=1 00:10:34.519 rw=randwrite 00:10:34.519 time_based=1 00:10:34.519 runtime=1 00:10:34.519 ioengine=libaio 00:10:34.519 direct=1 00:10:34.519 bs=4096 00:10:34.519 iodepth=128 00:10:34.519 norandommap=0 00:10:34.520 numjobs=1 00:10:34.520 00:10:34.520 verify_dump=1 00:10:34.520 verify_backlog=512 00:10:34.520 verify_state_save=0 00:10:34.520 do_verify=1 00:10:34.520 verify=crc32c-intel 00:10:34.520 [job0] 00:10:34.520 filename=/dev/nvme0n1 00:10:34.520 [job1] 00:10:34.520 filename=/dev/nvme0n2 00:10:34.520 [job2] 00:10:34.520 filename=/dev/nvme0n3 00:10:34.520 [job3] 00:10:34.520 filename=/dev/nvme0n4 00:10:34.520 Could not set queue depth (nvme0n1) 00:10:34.520 Could not set queue depth (nvme0n2) 00:10:34.520 Could not set queue depth (nvme0n3) 00:10:34.520 Could not set queue depth (nvme0n4) 00:10:34.520 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.520 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.520 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.520 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.520 fio-3.35 00:10:34.520 Starting 4 threads 00:10:35.902 00:10:35.902 job0: (groupid=0, jobs=1): err= 0: pid=70271: Tue Oct 1 15:23:34 2024 00:10:35.902 read: IOPS=6077, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1011msec) 00:10:35.902 slat (usec): min=2, max=10541, avg=85.33, stdev=561.45 00:10:35.902 clat (usec): min=3911, max=22091, avg=10763.87, stdev=2877.47 00:10:35.902 lat (usec): min=3921, max=22102, avg=10849.20, stdev=2908.80 00:10:35.902 clat percentiles (usec): 00:10:35.902 | 1.00th=[ 4686], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8848], 00:10:35.902 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10683], 00:10:35.902 | 70.00th=[11338], 80.00th=[12780], 90.00th=[14877], 95.00th=[16909], 00:10:35.902 | 99.00th=[19792], 99.50th=[20841], 99.90th=[21627], 99.95th=[22152], 00:10:35.902 | 99.99th=[22152] 00:10:35.902 write: IOPS=6241, BW=24.4MiB/s (25.6MB/s)(24.6MiB/1011msec); 0 zone resets 00:10:35.902 slat (usec): min=3, max=8733, avg=69.53, stdev=352.83 00:10:35.902 clat (usec): min=3233, max=22049, avg=9828.43, stdev=2139.88 00:10:35.902 lat (usec): min=3259, max=22056, avg=9897.96, stdev=2169.12 00:10:35.902 clat percentiles (usec): 00:10:35.902 | 1.00th=[ 3949], 5.00th=[ 5276], 10.00th=[ 6521], 20.00th=[ 8717], 00:10:35.902 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:10:35.902 | 70.00th=[10683], 80.00th=[11076], 90.00th=[12256], 95.00th=[12649], 00:10:35.902 | 99.00th=[14091], 99.50th=[14484], 99.90th=[19530], 99.95th=[21627], 00:10:35.902 | 99.99th=[22152] 00:10:35.902 bw ( KiB/s): min=24576, max=24888, per=48.85%, avg=24732.00, stdev=220.62, samples=2 00:10:35.902 iops : min= 6144, max= 6222, avg=6183.00, stdev=55.15, samples=2 00:10:35.902 lat (msec) : 4=0.65%, 10=43.43%, 20=55.53%, 50=0.39% 00:10:35.902 cpu : usr=4.46%, sys=11.19%, ctx=820, majf=0, minf=13 00:10:35.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:35.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.902 issued rwts: total=6144,6310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.902 job1: (groupid=0, jobs=1): err= 0: pid=70272: Tue Oct 1 15:23:34 2024 00:10:35.902 read: IOPS=1654, BW=6619KiB/s (6778kB/s)(6652KiB/1005msec) 00:10:35.902 slat (usec): min=3, max=11342, avg=234.66, stdev=1061.27 00:10:35.902 clat (usec): min=2866, max=54643, avg=25971.93, stdev=6703.52 00:10:35.902 lat (usec): min=6820, max=54656, avg=26206.59, stdev=6812.68 00:10:35.902 clat percentiles (usec): 00:10:35.902 | 1.00th=[ 7242], 5.00th=[17171], 10.00th=[21365], 20.00th=[22414], 00:10:35.902 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[26346], 00:10:35.902 | 70.00th=[28181], 80.00th=[30802], 90.00th=[33817], 95.00th=[38011], 00:10:35.902 | 99.00th=[47449], 99.50th=[49021], 99.90th=[54789], 99.95th=[54789], 00:10:35.902 | 99.99th=[54789] 00:10:35.902 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:10:35.902 slat (usec): min=4, max=21277, avg=291.98, stdev=1313.19 00:10:35.902 clat (usec): min=16062, max=87858, avg=40094.66, stdev=19321.95 00:10:35.902 lat (usec): min=16093, max=88473, avg=40386.64, stdev=19450.10 00:10:35.902 clat percentiles 
(usec): 00:10:35.902 | 1.00th=[19006], 5.00th=[19792], 10.00th=[20579], 20.00th=[23725], 00:10:35.902 | 30.00th=[23987], 40.00th=[25035], 50.00th=[35390], 60.00th=[45351], 00:10:35.902 | 70.00th=[46924], 80.00th=[54789], 90.00th=[73925], 95.00th=[79168], 00:10:35.902 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[87557], 00:10:35.902 | 99.99th=[87557] 00:10:35.902 bw ( KiB/s): min= 8184, max= 8192, per=16.17%, avg=8188.00, stdev= 5.66, samples=2 00:10:35.902 iops : min= 2046, max= 2048, avg=2047.00, stdev= 1.41, samples=2 00:10:35.902 lat (msec) : 4=0.03%, 10=0.54%, 20=6.84%, 50=78.68%, 100=13.90% 00:10:35.902 cpu : usr=1.59%, sys=4.48%, ctx=510, majf=0, minf=15 00:10:35.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:10:35.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.902 issued rwts: total=1663,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.902 job2: (groupid=0, jobs=1): err= 0: pid=70273: Tue Oct 1 15:23:34 2024 00:10:35.902 read: IOPS=1741, BW=6967KiB/s (7134kB/s)(7016KiB/1007msec) 00:10:35.902 slat (usec): min=3, max=15731, avg=250.05, stdev=1167.62 00:10:35.902 clat (usec): min=6191, max=81095, avg=29394.25, stdev=11635.14 00:10:35.902 lat (usec): min=6201, max=81266, avg=29644.30, stdev=11746.86 00:10:35.902 clat percentiles (usec): 00:10:35.902 | 1.00th=[11338], 5.00th=[17695], 10.00th=[21890], 20.00th=[22676], 00:10:35.902 | 30.00th=[23200], 40.00th=[23725], 50.00th=[26608], 60.00th=[27919], 00:10:35.902 | 70.00th=[29230], 80.00th=[33817], 90.00th=[43779], 95.00th=[51643], 00:10:35.902 | 99.00th=[76022], 99.50th=[78119], 99.90th=[81265], 99.95th=[81265], 00:10:35.902 | 99.99th=[81265] 00:10:35.902 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:10:35.903 slat (usec): min=4, max=13501, avg=267.43, stdev=1182.82 00:10:35.903 clat (usec): min=13940, max=86356, avg=37134.91, stdev=19087.02 00:10:35.903 lat (usec): min=13964, max=86389, avg=37402.34, stdev=19230.14 00:10:35.903 clat percentiles (usec): 00:10:35.903 | 1.00th=[15795], 5.00th=[17957], 10.00th=[19530], 20.00th=[20841], 00:10:35.903 | 30.00th=[23725], 40.00th=[24773], 50.00th=[26608], 60.00th=[40633], 00:10:35.903 | 70.00th=[45876], 80.00th=[53216], 90.00th=[69731], 95.00th=[78119], 00:10:35.903 | 99.00th=[80217], 99.50th=[81265], 99.90th=[85459], 99.95th=[86508], 00:10:35.903 | 99.99th=[86508] 00:10:35.903 bw ( KiB/s): min= 8192, max= 8208, per=16.20%, avg=8200.00, stdev=11.31, samples=2 00:10:35.903 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:10:35.903 lat (msec) : 10=0.29%, 20=10.92%, 50=74.59%, 100=14.20% 00:10:35.903 cpu : usr=0.99%, sys=5.37%, ctx=483, majf=0, minf=7 00:10:35.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:10:35.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.903 issued rwts: total=1754,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.903 job3: (groupid=0, jobs=1): err= 0: pid=70274: Tue Oct 1 15:23:34 2024 00:10:35.903 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:10:35.903 slat (usec): min=4, max=21087, avg=163.03, stdev=1199.36 00:10:35.903 clat (usec): min=9923, 
max=56613, avg=20178.03, stdev=9250.82 00:10:35.903 lat (usec): min=9943, max=56625, avg=20341.07, stdev=9354.64 00:10:35.903 clat percentiles (usec): 00:10:35.903 | 1.00th=[11207], 5.00th=[13698], 10.00th=[13829], 20.00th=[13960], 00:10:35.903 | 30.00th=[14353], 40.00th=[14484], 50.00th=[15664], 60.00th=[16909], 00:10:35.903 | 70.00th=[20317], 80.00th=[26870], 90.00th=[35914], 95.00th=[41681], 00:10:35.903 | 99.00th=[45351], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:10:35.903 | 99.99th=[56361] 00:10:35.903 write: IOPS=2367, BW=9469KiB/s (9697kB/s)(9564KiB/1010msec); 0 zone resets 00:10:35.903 slat (usec): min=9, max=23743, avg=272.22, stdev=1310.49 00:10:35.903 clat (msec): min=8, max=117, avg=35.90, stdev=23.25 00:10:35.903 lat (msec): min=12, max=117, avg=36.17, stdev=23.40 00:10:35.903 clat percentiles (msec): 00:10:35.903 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 19], 20.00th=[ 22], 00:10:35.903 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 27], 00:10:35.903 | 70.00th=[ 41], 80.00th=[ 50], 90.00th=[ 75], 95.00th=[ 86], 00:10:35.903 | 99.00th=[ 107], 99.50th=[ 116], 99.90th=[ 117], 99.95th=[ 117], 00:10:35.903 | 99.99th=[ 117] 00:10:35.903 bw ( KiB/s): min= 8064, max=10060, per=17.90%, avg=9062.00, stdev=1411.39, samples=2 00:10:35.903 iops : min= 2016, max= 2515, avg=2265.50, stdev=352.85, samples=2 00:10:35.903 lat (msec) : 10=0.16%, 20=37.98%, 50=51.86%, 100=8.79%, 250=1.22% 00:10:35.903 cpu : usr=1.49%, sys=6.05%, ctx=278, majf=0, minf=11 00:10:35.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:35.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.903 issued rwts: total=2048,2391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.903 00:10:35.903 Run status group 0 (all jobs): 00:10:35.903 READ: bw=44.9MiB/s (47.0MB/s), 6619KiB/s-23.7MiB/s (6778kB/s-24.9MB/s), io=45.3MiB (47.6MB), run=1005-1011msec 00:10:35.903 WRITE: bw=49.4MiB/s (51.8MB/s), 8135KiB/s-24.4MiB/s (8330kB/s-25.6MB/s), io=50.0MiB (52.4MB), run=1005-1011msec 00:10:35.903 00:10:35.903 Disk stats (read/write): 00:10:35.903 nvme0n1: ios=5170/5487, merge=0/0, ticks=51831/51754, in_queue=103585, util=88.58% 00:10:35.903 nvme0n2: ios=1386/1536, merge=0/0, ticks=17873/33710, in_queue=51583, util=87.35% 00:10:35.903 nvme0n3: ios=1435/1536, merge=0/0, ticks=22665/30653, in_queue=53318, util=89.07% 00:10:35.903 nvme0n4: ios=2048/2143, merge=0/0, ticks=19681/31487, in_queue=51168, util=89.83% 00:10:35.903 15:23:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:35.903 15:23:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70292 00:10:35.903 15:23:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:35.903 15:23:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:35.903 [global] 00:10:35.903 thread=1 00:10:35.903 invalidate=1 00:10:35.903 rw=read 00:10:35.903 time_based=1 00:10:35.903 runtime=10 00:10:35.903 ioengine=libaio 00:10:35.903 direct=1 00:10:35.903 bs=4096 00:10:35.903 iodepth=1 00:10:35.903 norandommap=1 00:10:35.903 numjobs=1 00:10:35.903 00:10:35.903 [job0] 00:10:35.903 filename=/dev/nvme0n1 00:10:35.903 [job1] 00:10:35.903 filename=/dev/nvme0n2 00:10:35.903 [job2] 00:10:35.903 
filename=/dev/nvme0n3 00:10:35.903 [job3] 00:10:35.903 filename=/dev/nvme0n4 00:10:35.903 Could not set queue depth (nvme0n1) 00:10:35.903 Could not set queue depth (nvme0n2) 00:10:35.903 Could not set queue depth (nvme0n3) 00:10:35.903 Could not set queue depth (nvme0n4) 00:10:35.903 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.903 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.903 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.903 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.903 fio-3.35 00:10:35.903 Starting 4 threads 00:10:39.186 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:39.186 fio: pid=70336, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.186 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46317568, buflen=4096 00:10:39.186 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:39.477 fio: pid=70335, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.477 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=51695616, buflen=4096 00:10:39.477 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.477 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:39.734 fio: pid=70333, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.734 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=8781824, buflen=4096 00:10:39.734 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.734 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:39.992 fio: pid=70334, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.992 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14487552, buflen=4096 00:10:39.992 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.992 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:39.992 00:10:39.992 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70333: Tue Oct 1 15:23:38 2024 00:10:39.992 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(72.4MiB/3644msec) 00:10:39.992 slat (usec): min=13, max=10283, avg=19.80, stdev=129.56 00:10:39.992 clat (usec): min=87, max=3668, avg=175.04, stdev=53.61 00:10:39.992 lat (usec): min=148, max=10558, avg=194.84, stdev=142.06 00:10:39.992 clat percentiles (usec): 00:10:39.992 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:10:39.992 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 
00:10:39.992 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 206], 95.00th=[ 223], 00:10:39.992 | 99.00th=[ 371], 99.50th=[ 416], 99.90th=[ 529], 99.95th=[ 1106], 00:10:39.992 | 99.99th=[ 2147] 00:10:39.992 bw ( KiB/s): min=17664, max=21928, per=32.08%, avg=20397.00, stdev=1790.68, samples=7 00:10:39.992 iops : min= 4416, max= 5482, avg=5099.14, stdev=447.72, samples=7 00:10:39.992 lat (usec) : 100=0.01%, 250=98.41%, 500=1.47%, 750=0.04%, 1000=0.01% 00:10:39.992 lat (msec) : 2=0.04%, 4=0.02% 00:10:39.993 cpu : usr=1.59%, sys=7.58%, ctx=18544, majf=0, minf=1 00:10:39.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.993 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.993 issued rwts: total=18529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.993 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70334: Tue Oct 1 15:23:38 2024 00:10:39.993 read: IOPS=5075, BW=19.8MiB/s (20.8MB/s)(77.8MiB/3925msec) 00:10:39.993 slat (usec): min=13, max=11446, avg=18.91, stdev=152.58 00:10:39.993 clat (usec): min=136, max=14558, avg=176.59, stdev=112.73 00:10:39.993 lat (usec): min=152, max=14610, avg=195.50, stdev=190.32 00:10:39.993 clat percentiles (usec): 00:10:39.993 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:39.993 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:10:39.993 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 204], 00:10:39.993 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 441], 99.95th=[ 693], 00:10:39.993 | 99.99th=[ 3458] 00:10:39.993 bw ( KiB/s): min=19031, max=20704, per=32.03%, avg=20359.86, stdev=598.07, samples=7 00:10:39.993 iops : min= 4757, max= 5176, avg=5089.86, stdev=149.80, samples=7 00:10:39.993 lat (usec) : 250=99.04%, 500=0.88%, 750=0.03%, 1000=0.01% 00:10:39.993 lat (msec) : 2=0.02%, 4=0.02%, 20=0.01% 00:10:39.993 cpu : usr=1.45%, sys=6.88%, ctx=19929, majf=0, minf=1 00:10:39.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.993 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.993 issued rwts: total=19922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.993 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70335: Tue Oct 1 15:23:38 2024 00:10:39.993 read: IOPS=3757, BW=14.7MiB/s (15.4MB/s)(49.3MiB/3359msec) 00:10:39.993 slat (usec): min=8, max=7815, avg=17.88, stdev=94.34 00:10:39.993 clat (usec): min=148, max=3333, avg=246.62, stdev=63.55 00:10:39.993 lat (usec): min=164, max=8077, avg=264.50, stdev=112.93 00:10:39.993 clat percentiles (usec): 00:10:39.993 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 184], 00:10:39.993 | 30.00th=[ 200], 40.00th=[ 241], 50.00th=[ 262], 60.00th=[ 273], 00:10:39.993 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:10:39.993 | 99.00th=[ 363], 99.50th=[ 396], 99.90th=[ 449], 99.95th=[ 461], 00:10:39.993 | 99.99th=[ 2057] 00:10:39.993 bw ( KiB/s): min=12864, max=18096, per=23.01%, avg=14626.67, stdev=2136.12, samples=6 00:10:39.993 iops : min= 3216, max= 4524, avg=3656.67, stdev=534.03, samples=6 00:10:39.993 lat 
(usec) : 250=42.90%, 500=57.04%, 750=0.02% 00:10:39.993 lat (msec) : 2=0.02%, 4=0.02% 00:10:39.993 cpu : usr=1.28%, sys=5.69%, ctx=12625, majf=0, minf=1 00:10:39.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.993 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.993 issued rwts: total=12622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.993 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70336: Tue Oct 1 15:23:38 2024 00:10:39.993 read: IOPS=3743, BW=14.6MiB/s (15.3MB/s)(44.2MiB/3021msec) 00:10:39.993 slat (nsec): min=8582, max=80742, avg=14974.83, stdev=4940.50 00:10:39.993 clat (usec): min=150, max=6303, avg=250.67, stdev=117.25 00:10:39.993 lat (usec): min=165, max=6317, avg=265.65, stdev=117.14 00:10:39.993 clat percentiles (usec): 00:10:39.993 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 178], 00:10:39.993 | 30.00th=[ 190], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 277], 00:10:39.993 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 322], 00:10:39.993 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 635], 99.95th=[ 3326], 00:10:39.993 | 99.99th=[ 4883] 00:10:39.993 bw ( KiB/s): min=12864, max=20656, per=23.61%, avg=15009.33, stdev=3013.46, samples=6 00:10:39.993 iops : min= 3216, max= 5164, avg=3752.33, stdev=753.36, samples=6 00:10:39.993 lat (usec) : 250=37.74%, 500=62.11%, 750=0.05% 00:10:39.993 lat (msec) : 2=0.02%, 4=0.04%, 10=0.04% 00:10:39.993 cpu : usr=1.03%, sys=5.10%, ctx=11311, majf=0, minf=2 00:10:39.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.993 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.993 issued rwts: total=11309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.993 00:10:39.993 Run status group 0 (all jobs): 00:10:39.993 READ: bw=62.1MiB/s (65.1MB/s), 14.6MiB/s-19.9MiB/s (15.3MB/s-20.8MB/s), io=244MiB (256MB), run=3021-3925msec 00:10:39.993 00:10:39.993 Disk stats (read/write): 00:10:39.993 nvme0n1: ios=18336/0, merge=0/0, ticks=3296/0, in_queue=3296, util=95.41% 00:10:39.993 nvme0n2: ios=19522/0, merge=0/0, ticks=3519/0, in_queue=3519, util=95.68% 00:10:39.993 nvme0n3: ios=12604/0, merge=0/0, ticks=3064/0, in_queue=3064, util=96.59% 00:10:39.993 nvme0n4: ios=10776/0, merge=0/0, ticks=2609/0, in_queue=2609, util=96.41% 00:10:40.251 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.251 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:40.509 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.509 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:40.768 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.768 15:23:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:41.334 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.334 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70292 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.593 nvmf hotplug test: fio failed as expected 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:41.593 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:10:42.156 rmmod nvme_tcp 00:10:42.156 rmmod nvme_fabrics 00:10:42.156 rmmod nvme_keyring 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 69802 ']' 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 69802 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 69802 ']' 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 69802 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69802 00:10:42.156 killing process with pid 69802 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69802' 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 69802 00:10:42.156 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 69802 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:42.415 15:23:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.415 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.672 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:42.672 ************************************ 00:10:42.672 END TEST nvmf_fio_target 00:10:42.672 ************************************ 00:10:42.672 00:10:42.672 real 0m20.659s 00:10:42.672 user 1m18.667s 00:10:42.672 sys 0m9.283s 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:42.673 ************************************ 00:10:42.673 START TEST nvmf_bdevio 00:10:42.673 ************************************ 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:42.673 * Looking for test storage... 
00:10:42.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:42.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.673 --rc genhtml_branch_coverage=1 00:10:42.673 --rc genhtml_function_coverage=1 00:10:42.673 --rc genhtml_legend=1 00:10:42.673 --rc geninfo_all_blocks=1 00:10:42.673 --rc geninfo_unexecuted_blocks=1 00:10:42.673 00:10:42.673 ' 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:42.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.673 --rc genhtml_branch_coverage=1 00:10:42.673 --rc genhtml_function_coverage=1 00:10:42.673 --rc genhtml_legend=1 00:10:42.673 --rc geninfo_all_blocks=1 00:10:42.673 --rc geninfo_unexecuted_blocks=1 00:10:42.673 00:10:42.673 ' 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:42.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.673 --rc genhtml_branch_coverage=1 00:10:42.673 --rc genhtml_function_coverage=1 00:10:42.673 --rc genhtml_legend=1 00:10:42.673 --rc geninfo_all_blocks=1 00:10:42.673 --rc geninfo_unexecuted_blocks=1 00:10:42.673 00:10:42.673 ' 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:42.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.673 --rc genhtml_branch_coverage=1 00:10:42.673 --rc genhtml_function_coverage=1 00:10:42.673 --rc genhtml_legend=1 00:10:42.673 --rc geninfo_all_blocks=1 00:10:42.673 --rc geninfo_unexecuted_blocks=1 00:10:42.673 00:10:42.673 ' 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:42.673 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:42.932 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:42.932 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.932 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.932 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.933 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
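For orientation: the nvmftestinit call above (transport tcp, NET_TYPE=virt) hands off to nvmf_veth_init, and the ip(8) trace that follows builds the whole virtual test network — four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and every bridge-side end enslaved to nvmf_br. A condensed reconstruction of those traced commands, assuming root and the same interface names and 10.0.0.0/24 addresses the trace shows (this is a sketch read off the log, not the script source):

  # Reconstructed from the nvmf_veth_init trace below; assumes root.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator pair 1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator pair 2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target pair 1
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target pair 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # first initiator IP
  ip addr add 10.0.0.2/24 dev nvmf_init_if2                     # second initiator IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # second target IP
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br                      # bridge all four br-side ends
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  ping -c 1 10.0.0.3                                            # initiator -> target, as the trace verifies
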
00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:42.933 Cannot find device "nvmf_init_br" 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:42.933 Cannot find device "nvmf_init_br2" 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:42.933 Cannot find device "nvmf_tgt_br" 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:42.933 Cannot find device "nvmf_tgt_br2" 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:42.933 Cannot find device "nvmf_init_br" 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:42.933 Cannot find device "nvmf_init_br2" 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:42.933 Cannot find device "nvmf_tgt_br" 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:42.933 Cannot find device "nvmf_tgt_br2" 00:10:42.933 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:42.934 Cannot find device "nvmf_br" 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:42.934 Cannot find device "nvmf_init_if" 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:42.934 Cannot find device "nvmf_init_if2" 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:42.934 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:42.934 
15:23:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:42.934 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:43.204 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:43.204 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:10:43.204 00:10:43.204 --- 10.0.0.3 ping statistics --- 00:10:43.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.204 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:43.204 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:43.204 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:10:43.204 00:10:43.204 --- 10.0.0.4 ping statistics --- 00:10:43.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.204 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:43.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:10:43.204 00:10:43.204 --- 10.0.0.1 ping statistics --- 00:10:43.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.204 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:43.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:10:43.204 00:10:43.204 --- 10.0.0.2 ping statistics --- 00:10:43.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.204 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=70721 00:10:43.204 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:43.205 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 70721 00:10:43.205 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 70721 ']' 00:10:43.205 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.205 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.205 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.205 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.205 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.205 [2024-10-01 15:23:42.321914] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:10:43.205 [2024-10-01 15:23:42.322044] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.462 [2024-10-01 15:23:42.464901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.462 [2024-10-01 15:23:42.555932] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.462 [2024-10-01 15:23:42.556480] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.462 [2024-10-01 15:23:42.557164] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.462 [2024-10-01 15:23:42.557901] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.462 [2024-10-01 15:23:42.558258] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.462 [2024-10-01 15:23:42.558904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:43.463 [2024-10-01 15:23:42.558997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:43.463 [2024-10-01 15:23:42.559081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:43.463 [2024-10-01 15:23:42.559095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 [2024-10-01 15:23:43.376045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 Malloc0 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 [2024-10-01 15:23:43.420782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:44.395 { 00:10:44.395 "params": { 00:10:44.395 "name": "Nvme$subsystem", 00:10:44.395 "trtype": "$TEST_TRANSPORT", 00:10:44.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:44.395 "adrfam": "ipv4", 00:10:44.395 "trsvcid": "$NVMF_PORT", 00:10:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:44.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:44.395 "hdgst": ${hdgst:-false}, 00:10:44.395 "ddgst": ${ddgst:-false} 00:10:44.395 }, 00:10:44.395 "method": "bdev_nvme_attach_controller" 00:10:44.395 } 00:10:44.395 EOF 00:10:44.395 )") 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:44.395 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:44.395 "params": { 00:10:44.395 "name": "Nvme1", 00:10:44.395 "trtype": "tcp", 00:10:44.395 "traddr": "10.0.0.3", 00:10:44.395 "adrfam": "ipv4", 00:10:44.395 "trsvcid": "4420", 00:10:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:44.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:44.395 "hdgst": false, 00:10:44.395 "ddgst": false 00:10:44.395 }, 00:10:44.395 "method": "bdev_nvme_attach_controller" 00:10:44.395 }' 00:10:44.395 [2024-10-01 15:23:43.474878] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:10:44.395 [2024-10-01 15:23:43.475383] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70780 ] 00:10:44.653 [2024-10-01 15:23:43.606105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:44.653 [2024-10-01 15:23:43.666654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.653 [2024-10-01 15:23:43.666728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.653 [2024-10-01 15:23:43.666736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.653 I/O targets: 00:10:44.653 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:44.653 00:10:44.653 00:10:44.653 CUnit - A unit testing framework for C - Version 2.1-3 00:10:44.653 http://cunit.sourceforge.net/ 00:10:44.653 00:10:44.653 00:10:44.653 Suite: bdevio tests on: Nvme1n1 00:10:44.910 Test: blockdev write read block ...passed 00:10:44.910 Test: blockdev write zeroes read block ...passed 00:10:44.910 Test: blockdev write zeroes read no split ...passed 00:10:44.910 Test: blockdev write zeroes read split ...passed 00:10:44.910 Test: blockdev write zeroes read split partial ...passed 00:10:44.910 Test: blockdev reset ...[2024-10-01 15:23:43.928395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:44.910 [2024-10-01 15:23:43.928570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a9b80 (9): Bad file descriptor 00:10:44.910 [2024-10-01 15:23:43.941470] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:44.910 passed 00:10:44.910 Test: blockdev write read 8 blocks ...passed 00:10:44.910 Test: blockdev write read size > 128k ...passed 00:10:44.910 Test: blockdev write read invalid size ...passed 00:10:44.910 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:44.910 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:44.910 Test: blockdev write read max offset ...passed 00:10:44.910 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:44.910 Test: blockdev writev readv 8 blocks ...passed 00:10:44.910 Test: blockdev writev readv 30 x 1block ...passed 00:10:45.169 Test: blockdev writev readv block ...passed 00:10:45.169 Test: blockdev writev readv size > 128k ...passed 00:10:45.169 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:45.169 Test: blockdev comparev and writev ...[2024-10-01 15:23:44.115135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.169 [2024-10-01 15:23:44.115200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:45.169 [2024-10-01 15:23:44.115224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.169 [2024-10-01 15:23:44.115236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:45.169 [2024-10-01 15:23:44.115854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.169 [2024-10-01 15:23:44.115895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:45.170 [2024-10-01 15:23:44.115923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.170 [2024-10-01 15:23:44.115935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:45.170 [2024-10-01 15:23:44.116405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.170 [2024-10-01 15:23:44.116446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:45.170 [2024-10-01 15:23:44.116465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.170 [2024-10-01 15:23:44.116476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:45.170 [2024-10-01 15:23:44.116986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.170 [2024-10-01 15:23:44.117038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:45.170 [2024-10-01 15:23:44.117069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.170 [2024-10-01 15:23:44.117090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:45.170 passed 00:10:45.170 Test: blockdev nvme passthru rw ...passed 00:10:45.170 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:23:44.199940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:45.170 [2024-10-01 15:23:44.200005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:45.170 [2024-10-01 15:23:44.200165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:45.170 [2024-10-01 15:23:44.200184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:45.170 [2024-10-01 15:23:44.200323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:45.170 [2024-10-01 15:23:44.200349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:45.170 [2024-10-01 15:23:44.200506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:45.170 [2024-10-01 15:23:44.200532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:45.170 passed 00:10:45.170 Test: blockdev nvme admin passthru ...passed 00:10:45.170 Test: blockdev copy ...passed 00:10:45.170 00:10:45.170 Run Summary: Type Total Ran Passed Failed Inactive 00:10:45.170 suites 1 1 n/a 0 0 00:10:45.170 tests 23 23 23 0 0 00:10:45.170 asserts 152 152 152 0 n/a 00:10:45.170 00:10:45.170 Elapsed time = 0.889 seconds 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.427 rmmod nvme_tcp 00:10:45.427 rmmod nvme_fabrics 00:10:45.427 rmmod nvme_keyring 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
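The teardown that follows (killprocess, iptr, nvmf_veth_fini) undoes the setup in reverse; the iptr helper can do so safely because every firewall rule the suite added earlier in this log was inserted with an '-m comment --comment SPDK_NVMF:...' tag. A minimal sketch of that save/filter/restore idiom, reconstructed from the traced pipeline rather than from the helper's source:

  # Re-load the firewall minus exactly the rules the test suite tagged,
  # leaving all pre-existing rules intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
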
00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 70721 ']' 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 70721 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 70721 ']' 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 70721 00:10:45.427 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:45.428 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.428 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70721 00:10:45.428 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:45.428 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:45.428 killing process with pid 70721 00:10:45.428 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70721' 00:10:45.428 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 70721 00:10:45.428 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 70721 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:45.685 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:45.944 00:10:45.944 real 0m3.304s 00:10:45.944 user 0m10.686s 00:10:45.944 sys 0m0.816s 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.944 ************************************ 00:10:45.944 END TEST nvmf_bdevio 00:10:45.944 ************************************ 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:45.944 00:10:45.944 real 3m37.743s 00:10:45.944 user 11m29.947s 00:10:45.944 sys 1m1.206s 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.944 15:23:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.944 ************************************ 00:10:45.944 END TEST nvmf_target_core 00:10:45.944 ************************************ 00:10:45.944 15:23:45 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:45.944 15:23:45 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:45.944 15:23:45 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.944 15:23:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:45.944 ************************************ 00:10:45.944 START TEST nvmf_target_extra 00:10:45.944 ************************************ 00:10:45.944 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:45.944 * Looking for test storage... 
00:10:45.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:45.944 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:45.944 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:45.944 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:46.202 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:46.202 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.202 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.202 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.202 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.202 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:46.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.203 --rc genhtml_branch_coverage=1 00:10:46.203 --rc genhtml_function_coverage=1 00:10:46.203 --rc genhtml_legend=1 00:10:46.203 --rc geninfo_all_blocks=1 00:10:46.203 --rc geninfo_unexecuted_blocks=1 00:10:46.203 00:10:46.203 ' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:46.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.203 --rc genhtml_branch_coverage=1 00:10:46.203 --rc genhtml_function_coverage=1 00:10:46.203 --rc genhtml_legend=1 00:10:46.203 --rc geninfo_all_blocks=1 00:10:46.203 --rc geninfo_unexecuted_blocks=1 00:10:46.203 00:10:46.203 ' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:46.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.203 --rc genhtml_branch_coverage=1 00:10:46.203 --rc genhtml_function_coverage=1 00:10:46.203 --rc genhtml_legend=1 00:10:46.203 --rc geninfo_all_blocks=1 00:10:46.203 --rc geninfo_unexecuted_blocks=1 00:10:46.203 00:10:46.203 ' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:46.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.203 --rc genhtml_branch_coverage=1 00:10:46.203 --rc genhtml_function_coverage=1 00:10:46.203 --rc genhtml_legend=1 00:10:46.203 --rc geninfo_all_blocks=1 00:10:46.203 --rc geninfo_unexecuted_blocks=1 00:10:46.203 00:10:46.203 ' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.203 15:23:45 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.203 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.203 ************************************ 00:10:46.203 START TEST nvmf_example 00:10:46.203 ************************************ 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:46.203 * Looking for test storage... 
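Note on the "[: : integer expression expected" warning emitted by common.sh line 33 above: it is the shell complaining about an arithmetic test against an empty string; the trace shows it evaluating '[' '' -eq 1 ']'. A minimal reproduction and the usual defensive default (the variable name here is hypothetical, not the one common.sh uses):

    flag=''
    [ "$flag" -eq 1 ]               # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] || true  # defaulting the empty value avoids the warning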
00:10:46.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.203 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.204 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.462 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:46.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.462 --rc genhtml_branch_coverage=1 00:10:46.462 --rc genhtml_function_coverage=1 00:10:46.462 --rc genhtml_legend=1 00:10:46.462 --rc geninfo_all_blocks=1 00:10:46.463 --rc geninfo_unexecuted_blocks=1 00:10:46.463 00:10:46.463 ' 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:46.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.463 --rc genhtml_branch_coverage=1 00:10:46.463 --rc genhtml_function_coverage=1 00:10:46.463 --rc genhtml_legend=1 00:10:46.463 --rc geninfo_all_blocks=1 00:10:46.463 --rc geninfo_unexecuted_blocks=1 00:10:46.463 00:10:46.463 ' 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:46.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.463 --rc genhtml_branch_coverage=1 00:10:46.463 --rc genhtml_function_coverage=1 00:10:46.463 --rc genhtml_legend=1 00:10:46.463 --rc geninfo_all_blocks=1 00:10:46.463 --rc geninfo_unexecuted_blocks=1 00:10:46.463 00:10:46.463 ' 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:46.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.463 --rc genhtml_branch_coverage=1 00:10:46.463 --rc genhtml_function_coverage=1 00:10:46.463 --rc genhtml_legend=1 00:10:46.463 --rc geninfo_all_blocks=1 00:10:46.463 --rc geninfo_unexecuted_blocks=1 00:10:46.463 00:10:46.463 ' 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:46.463 15:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.463 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:46.463 15:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.463 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:46.464 Cannot find device "nvmf_init_br" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:46.464 Cannot find device "nvmf_init_br2" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:46.464 Cannot find device "nvmf_tgt_br" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.464 Cannot find device "nvmf_tgt_br2" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:46.464 Cannot find device "nvmf_init_br" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:46.464 Cannot find device "nvmf_init_br2" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:46.464 Cannot find device "nvmf_tgt_br" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:46.464 Cannot find device "nvmf_tgt_br2" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:46.464 Cannot find device "nvmf_br" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:46.464 Cannot find 
device "nvmf_init_if" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:46.464 Cannot find device "nvmf_init_if2" 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:46.464 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:46.722 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:46.722 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:10:46.722 00:10:46.722 --- 10.0.0.3 ping statistics --- 00:10:46.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.722 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:46.722 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:46.722 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:10:46.722 00:10:46.722 --- 10.0.0.4 ping statistics --- 00:10:46.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.722 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:46.722 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:46.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:46.722 00:10:46.723 --- 10.0.0.1 ping statistics --- 00:10:46.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.723 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:46.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:10:46.723 00:10:46.723 --- 10.0.0.2 ping statistics --- 00:10:46.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.723 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # return 0 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71075 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71075 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 71075 ']' 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:46.723 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.723 15:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.289 15:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:10:47.289 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:59.490 Initializing NVMe Controllers 00:10:59.490 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.490 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:59.490 Initialization complete. Launching workers. 00:10:59.490 ======================================================== 00:10:59.490 Latency(us) 00:10:59.490 Device Information : IOPS MiB/s Average min max 00:10:59.490 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13674.00 53.41 4683.54 947.22 22217.51 00:10:59.490 ======================================================== 00:10:59.490 Total : 13674.00 53.41 4683.54 947.22 22217.51 00:10:59.490 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.490 rmmod nvme_tcp 00:10:59.490 rmmod nvme_fabrics 00:10:59.490 rmmod nvme_keyring 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 71075 ']' 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 71075 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 71075 ']' 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 71075 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71075 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:59.490 killing process 
with pid 71075 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71075' 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 71075 00:10:59.490 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 71075 00:10:59.490 nvmf threads initialize successfully 00:10:59.490 bdev subsystem init successfully 00:10:59.490 created a nvmf target service 00:10:59.490 create targets's poll groups done 00:10:59.490 all subsystems of target started 00:10:59.491 nvmf target is running 00:10:59.491 all subsystems of target stopped 00:10:59.491 destroy targets's poll groups done 00:10:59.491 destroyed the nvmf target service 00:10:59.491 bdev subsystem finish successfully 00:10:59.491 nvmf threads destroy successfully 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:59.491 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.491 00:10:59.491 real 0m12.029s 00:10:59.491 user 0m41.507s 00:10:59.491 sys 0m2.033s 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.491 ************************************ 00:10:59.491 END TEST nvmf_example 00:10:59.491 ************************************ 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.491 ************************************ 00:10:59.491 START TEST nvmf_filesystem 00:10:59.491 ************************************ 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:59.491 * Looking for test storage... 
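Stripped of the xtrace noise, the nvmf_example run that just finished reduces to a short RPC sequence plus one perf invocation. All arguments below are taken verbatim from the trace; the sketch assumes the standalone scripts/rpc.py entry point in place of the harness's rpc_cmd wrapper:

    # Create the TCP transport and a 64 MiB, 512 B-block malloc bdev
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512        # returns Malloc0
    # Expose it through a subsystem listening on the namespaced target IP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Drive 4 KiB random I/O (30% reads) at queue depth 64 for 10 seconds
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'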
00:10:59.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:59.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.491 --rc genhtml_branch_coverage=1 00:10:59.491 --rc genhtml_function_coverage=1 00:10:59.491 --rc genhtml_legend=1 00:10:59.491 --rc geninfo_all_blocks=1 00:10:59.491 --rc geninfo_unexecuted_blocks=1 00:10:59.491 00:10:59.491 ' 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:59.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.491 --rc genhtml_branch_coverage=1 00:10:59.491 --rc genhtml_function_coverage=1 00:10:59.491 --rc genhtml_legend=1 00:10:59.491 --rc geninfo_all_blocks=1 00:10:59.491 --rc geninfo_unexecuted_blocks=1 00:10:59.491 00:10:59.491 ' 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:59.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.491 --rc genhtml_branch_coverage=1 00:10:59.491 --rc genhtml_function_coverage=1 00:10:59.491 --rc genhtml_legend=1 00:10:59.491 --rc geninfo_all_blocks=1 00:10:59.491 --rc geninfo_unexecuted_blocks=1 00:10:59.491 00:10:59.491 ' 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:59.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.491 --rc genhtml_branch_coverage=1 00:10:59.491 --rc genhtml_function_coverage=1 00:10:59.491 --rc genhtml_legend=1 00:10:59.491 --rc geninfo_all_blocks=1 00:10:59.491 --rc geninfo_unexecuted_blocks=1 00:10:59.491 00:10:59.491 ' 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:59.491 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:59.492 15:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # 
CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:59.492 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:59.493 #define SPDK_CONFIG_H 00:10:59.493 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:59.493 #define SPDK_CONFIG_APPS 1 00:10:59.493 #define SPDK_CONFIG_ARCH native 00:10:59.493 #undef SPDK_CONFIG_ASAN 00:10:59.493 #define SPDK_CONFIG_AVAHI 1 
00:10:59.493 #undef SPDK_CONFIG_CET 00:10:59.493 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:59.493 #define SPDK_CONFIG_COVERAGE 1 00:10:59.493 #define SPDK_CONFIG_CROSS_PREFIX 00:10:59.493 #undef SPDK_CONFIG_CRYPTO 00:10:59.493 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:59.493 #undef SPDK_CONFIG_CUSTOMOCF 00:10:59.493 #undef SPDK_CONFIG_DAOS 00:10:59.493 #define SPDK_CONFIG_DAOS_DIR 00:10:59.493 #define SPDK_CONFIG_DEBUG 1 00:10:59.493 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:59.493 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:59.493 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:59.493 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:59.493 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:59.493 #undef SPDK_CONFIG_DPDK_UADK 00:10:59.493 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:59.493 #define SPDK_CONFIG_EXAMPLES 1 00:10:59.493 #undef SPDK_CONFIG_FC 00:10:59.493 #define SPDK_CONFIG_FC_PATH 00:10:59.493 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:59.493 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:59.493 #define SPDK_CONFIG_FSDEV 1 00:10:59.493 #undef SPDK_CONFIG_FUSE 00:10:59.493 #undef SPDK_CONFIG_FUZZER 00:10:59.493 #define SPDK_CONFIG_FUZZER_LIB 00:10:59.493 #define SPDK_CONFIG_GOLANG 1 00:10:59.493 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:59.493 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:59.493 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:59.493 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:59.493 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:59.493 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:59.493 #undef SPDK_CONFIG_HAVE_LZ4 00:10:59.493 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:59.493 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:59.493 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:59.493 #define SPDK_CONFIG_IDXD 1 00:10:59.493 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:59.493 #undef SPDK_CONFIG_IPSEC_MB 00:10:59.493 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:59.493 #define SPDK_CONFIG_ISAL 1 00:10:59.493 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:59.493 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:59.493 #define SPDK_CONFIG_LIBDIR 00:10:59.493 #undef SPDK_CONFIG_LTO 00:10:59.493 #define SPDK_CONFIG_MAX_LCORES 128 00:10:59.493 #define SPDK_CONFIG_NVME_CUSE 1 00:10:59.493 #undef SPDK_CONFIG_OCF 00:10:59.493 #define SPDK_CONFIG_OCF_PATH 00:10:59.493 #define SPDK_CONFIG_OPENSSL_PATH 00:10:59.493 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:59.493 #define SPDK_CONFIG_PGO_DIR 00:10:59.493 #undef SPDK_CONFIG_PGO_USE 00:10:59.493 #define SPDK_CONFIG_PREFIX /usr/local 00:10:59.493 #undef SPDK_CONFIG_RAID5F 00:10:59.493 #undef SPDK_CONFIG_RBD 00:10:59.493 #define SPDK_CONFIG_RDMA 1 00:10:59.493 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:59.493 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:59.493 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:59.493 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:59.493 #define SPDK_CONFIG_SHARED 1 00:10:59.493 #undef SPDK_CONFIG_SMA 00:10:59.493 #define SPDK_CONFIG_TESTS 1 00:10:59.493 #undef SPDK_CONFIG_TSAN 00:10:59.493 #define SPDK_CONFIG_UBLK 1 00:10:59.493 #define SPDK_CONFIG_UBSAN 1 00:10:59.493 #undef SPDK_CONFIG_UNIT_TESTS 00:10:59.493 #undef SPDK_CONFIG_URING 00:10:59.493 #define SPDK_CONFIG_URING_PATH 00:10:59.493 #undef SPDK_CONFIG_URING_ZNS 00:10:59.493 #define SPDK_CONFIG_USDT 1 00:10:59.493 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:59.493 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:59.493 #undef SPDK_CONFIG_VFIO_USER 00:10:59.493 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:59.493 #define SPDK_CONFIG_VHOST 1 00:10:59.493 
#define SPDK_CONFIG_VIRTIO 1 00:10:59.493 #undef SPDK_CONFIG_VTUNE 00:10:59.493 #define SPDK_CONFIG_VTUNE_DIR 00:10:59.493 #define SPDK_CONFIG_WERROR 1 00:10:59.493 #define SPDK_CONFIG_WPDK_DIR 00:10:59.493 #undef SPDK_CONFIG_XNVME 00:10:59.493 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
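[editor's note] The long config.h dump ending above is applications.sh slurping the generated header and glob-matching its contents for the DEBUG define, rather than spawning grep. A minimal sketch of that probe (path taken from the trace):

config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
# $(<file) reads the whole file; the [[ == *pattern* ]] glob does the search.
if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    : # debug build detected; debug-only test apps may be enabled
fi
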
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:59.493 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:59.494 
15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:59.494 15:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:59.494 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:59.495 15:23:57 
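[editor's note] The long run of ': 0' / ': 1' commands paired with exports above is bash's default-assignment idiom: ':' is a no-op whose argument expansion assigns a fallback only when the variable is unset, so values exported earlier by autorun-spdk.conf survive. A minimal sketch of the pattern (the literal default values here are illustrative; the real ones live in autotest_common.sh):

: "${SPDK_TEST_NVMF:=0}"              # conf already exported 1, so ':' sees ': 1'
export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # default applied only if nothing set it
export SPDK_TEST_NVMF_TRANSPORT
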
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:59.495 
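[editor's note] Worth calling out in the block above: the harness rebuilds /var/tmp/asan_suppression_file from scratch and points LSAN at it, so the known libfuse3 leak does not fail sanitizer runs. A short sketch of that wiring, with the option strings copied from the trace:

supp=/var/tmp/asan_suppression_file
rm -rf "$supp"
echo "leak:libfuse3.so" >> "$supp"    # suppress the known fuse3 leak
export LSAN_OPTIONS="suppressions=$supp"
export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"
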
15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j10 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 
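[editor's note] The _LCOV bookkeeping just above decides whether lcov needs clang's gcov shim: when $CC matches *clang*, a --gcov-tool option pointing at the repo's llvm-gcov.sh is queued; with gcc (this run, hence the failing [[ '' == *clang* ]]) lcov_opt stays empty. A sketch of that selection, assuming the wrapper exists at the traced path:

_lcov_llvm_opt='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
lcov_opt=
if [[ ${CC:-} == *clang* ]]; then
    lcov_opt=$_lcov_llvm_opt    # clang coverage data needs the llvm-cov gcov shim
fi
# A later capture would then run along the lines of:
#   lcov $lcov_opt --capture -d . -o cov.info
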
00:10:59.495 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 71330 ]] 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 71330 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.iW59HM 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.iW59HM/tests/target /tmp/spdk.iW59HM 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13986869248 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5581639680 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 
-- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=devtmpfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4194304 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4194304 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6256394240 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266425344 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=2486431744 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=2506571776 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=20140032 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13986869248 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5581639680 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6266277888 
00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266425344 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=147456 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda2 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext4 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=840085504 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1012768768 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=103477248 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda3 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=vfat 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=91617280 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=104607744 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12990464 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=1253269504 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1253281792 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora39-libvirt/output 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=fuse.sshfs 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=94663680000 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=105088212992 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5039099904 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:59.496 * Looking for test storage... 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/home 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=13986869248 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == tmpfs ]] 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == ramfs ]] 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ /home == / ]] 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:59.496 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:59.497 15:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.497 --rc genhtml_branch_coverage=1 00:10:59.497 --rc genhtml_function_coverage=1 00:10:59.497 --rc genhtml_legend=1 00:10:59.497 --rc geninfo_all_blocks=1 00:10:59.497 --rc geninfo_unexecuted_blocks=1 00:10:59.497 00:10:59.497 ' 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.497 --rc genhtml_branch_coverage=1 00:10:59.497 --rc genhtml_function_coverage=1 00:10:59.497 --rc genhtml_legend=1 00:10:59.497 --rc geninfo_all_blocks=1 00:10:59.497 --rc geninfo_unexecuted_blocks=1 00:10:59.497 00:10:59.497 ' 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.497 --rc genhtml_branch_coverage=1 00:10:59.497 --rc genhtml_function_coverage=1 00:10:59.497 --rc genhtml_legend=1 00:10:59.497 --rc geninfo_all_blocks=1 00:10:59.497 --rc geninfo_unexecuted_blocks=1 00:10:59.497 00:10:59.497 ' 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.497 --rc genhtml_branch_coverage=1 00:10:59.497 --rc genhtml_function_coverage=1 00:10:59.497 --rc genhtml_legend=1 00:10:59.497 --rc geninfo_all_blocks=1 00:10:59.497 --rc geninfo_unexecuted_blocks=1 00:10:59.497 00:10:59.497 ' 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.497 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.498 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.498 15:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:59.498 Cannot find device "nvmf_init_br" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:59.498 Cannot find device "nvmf_init_br2" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:59.498 Cannot find device "nvmf_tgt_br" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.498 Cannot find device "nvmf_tgt_br2" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:59.498 Cannot find device "nvmf_init_br" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:59.498 Cannot find device "nvmf_init_br2" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:59.498 Cannot find device "nvmf_tgt_br" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:59.498 Cannot find device "nvmf_tgt_br2" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:59.498 Cannot find device "nvmf_br" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:59.498 Cannot find device "nvmf_init_if" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:59.498 Cannot find device "nvmf_init_if2" 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.498 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:59.498 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:59.498 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:59.499 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:59.499 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:10:59.499 00:10:59.499 --- 10.0.0.3 ping statistics --- 00:10:59.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.499 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:59.499 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:59.499 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:10:59.499 00:10:59.499 --- 10.0.0.4 ping statistics --- 00:10:59.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.499 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:59.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:10:59.499 00:10:59.499 --- 10.0.0.1 ping statistics --- 00:10:59.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.499 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:59.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:10:59.499 00:10:59.499 --- 10.0.0.2 ping statistics --- 00:10:59.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.499 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # return 0 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.499 ************************************ 00:10:59.499 START TEST nvmf_filesystem_no_in_capsule 00:10:59.499 ************************************ 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=71518 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 71518 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 71518 ']' 00:10:59.499 15:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.499 15:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.499 [2024-10-01 15:23:58.267767] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:10:59.499 [2024-10-01 15:23:58.267861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.499 [2024-10-01 15:23:58.405603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.499 [2024-10-01 15:23:58.479112] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.499 [2024-10-01 15:23:58.479175] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.499 [2024-10-01 15:23:58.479187] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.499 [2024-10-01 15:23:58.479195] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.499 [2024-10-01 15:23:58.479203] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
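Everything from nvmf_veth_init through the pings above builds a bridged veth topology: initiator interfaces stay on the host, target interfaces move into the nvmf_tgt_ns_spdk namespace, nvmf_br ties the bridge ends together, and iptables opens TCP port 4420. A condensed sketch of one initiator/target pair, mirroring the traced commands (the second pair and teardown are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # Bridge the host-side veth peers so initiator and target can talk
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # host -> namespaced target, through the bridge

Running the target inside a namespace is what lets a single VM act as both NVMe-oF initiator and target over a virtual network, which is why nvmfappstart below wraps nvmf_tgt in `ip netns exec nvmf_tgt_ns_spdk`.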
00:10:59.499 [2024-10-01 15:23:58.479329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.499 [2024-10-01 15:23:58.479923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.499 [2024-10-01 15:23:58.479857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.499 [2024-10-01 15:23:58.479920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.432 [2024-10-01 15:23:59.380299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.432 Malloc1 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.432 15:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.432 [2024-10-01 15:23:59.495472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.432 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:00.432 { 00:11:00.432 "aliases": [ 00:11:00.432 "a5f0793f-da2f-4ec4-9b05-b8683ca2dd7c" 00:11:00.432 ], 00:11:00.432 "assigned_rate_limits": { 00:11:00.432 "r_mbytes_per_sec": 0, 00:11:00.432 "rw_ios_per_sec": 0, 00:11:00.432 "rw_mbytes_per_sec": 0, 00:11:00.432 "w_mbytes_per_sec": 0 00:11:00.432 }, 00:11:00.432 "block_size": 512, 00:11:00.432 "claim_type": "exclusive_write", 00:11:00.432 "claimed": true, 00:11:00.432 "driver_specific": {}, 00:11:00.432 "memory_domains": [ 00:11:00.432 { 00:11:00.432 "dma_device_id": "system", 00:11:00.432 "dma_device_type": 1 00:11:00.432 }, 00:11:00.432 { 00:11:00.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.432 
"dma_device_type": 2 00:11:00.432 } 00:11:00.432 ], 00:11:00.432 "name": "Malloc1", 00:11:00.432 "num_blocks": 1048576, 00:11:00.432 "product_name": "Malloc disk", 00:11:00.432 "supported_io_types": { 00:11:00.432 "abort": true, 00:11:00.432 "compare": false, 00:11:00.432 "compare_and_write": false, 00:11:00.432 "copy": true, 00:11:00.432 "flush": true, 00:11:00.432 "get_zone_info": false, 00:11:00.432 "nvme_admin": false, 00:11:00.432 "nvme_io": false, 00:11:00.432 "nvme_io_md": false, 00:11:00.432 "nvme_iov_md": false, 00:11:00.433 "read": true, 00:11:00.433 "reset": true, 00:11:00.433 "seek_data": false, 00:11:00.433 "seek_hole": false, 00:11:00.433 "unmap": true, 00:11:00.433 "write": true, 00:11:00.433 "write_zeroes": true, 00:11:00.433 "zcopy": true, 00:11:00.433 "zone_append": false, 00:11:00.433 "zone_management": false 00:11:00.433 }, 00:11:00.433 "uuid": "a5f0793f-da2f-4ec4-9b05-b8683ca2dd7c", 00:11:00.433 "zoned": false 00:11:00.433 } 00:11:00.433 ]' 00:11:00.433 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:00.433 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:00.433 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:00.691 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:00.691 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:00.691 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:00.691 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:00.691 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:00.691 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.691 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:00.691 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.691 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:00.691 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:03.220 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:03.220 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:03.220 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:03.220 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:03.220 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:03.220 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:03.220 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:03.221 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 ************************************ 00:11:04.155 START TEST filesystem_ext4 00:11:04.155 ************************************ 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
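Before the ext4 test starts, the harness has provisioned the target over JSON-RPC and attached the initiator: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev with 512-byte blocks, a subsystem exposing that bdev with a listener on 10.0.0.3:4420, then an `nvme connect` and a GPT partition spanning the disk. The same steps in sketch form, calling scripts/rpc.py directly rather than the harness's rpc_cmd wrapper; the polling loop is an illustrative stand-in for waitforserial:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create -b Malloc1 512 512    # 512 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    # Initiator side: connect, then wait for the namespace to show up
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

Note how the serial number (SPDKISFASTANDAWESOME) is the handle that ties the target-side subsystem to the initiator-side block device: lsblk's SERIAL column is how the trace locates nvme0n1.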
00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:04.155 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:04.155 mke2fs 1.47.0 (5-Feb-2023) 00:11:04.155 Discarding device blocks: 0/522240 done 00:11:04.155 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:04.155 Filesystem UUID: 57ba1b8f-0600-4b94-bc56-1c61aefce4b9 00:11:04.155 Superblock backups stored on blocks: 00:11:04.155 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:04.155 00:11:04.155 Allocating group tables: 0/64 done 00:11:04.155 Writing inode tables: 0/64 done 00:11:04.155 Creating journal (8192 blocks): done 00:11:04.155 Writing superblocks and filesystem accounting information: 0/64 done 00:11:04.155 00:11:04.155 15:24:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:04.155 15:24:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.466 
15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71518 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.466 00:11:09.466 real 0m5.609s 00:11:09.466 user 0m0.023s 00:11:09.466 sys 0m0.054s 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.466 ************************************ 00:11:09.466 END TEST filesystem_ext4 00:11:09.466 ************************************ 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.466 ************************************ 00:11:09.466 START TEST filesystem_btrfs 00:11:09.466 ************************************ 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:09.466 15:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:09.466 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:09.725 btrfs-progs v6.8.1 00:11:09.725 See https://btrfs.readthedocs.io for more information. 00:11:09.725 00:11:09.725 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:09.725 NOTE: several default settings have changed in version 5.15, please make sure 00:11:09.725 this does not affect your deployments: 00:11:09.725 - DUP for metadata (-m dup) 00:11:09.725 - enabled no-holes (-O no-holes) 00:11:09.725 - enabled free-space-tree (-R free-space-tree) 00:11:09.725 00:11:09.725 Label: (null) 00:11:09.725 UUID: 1e886114-d707-4033-a246-4ff62df362a6 00:11:09.725 Node size: 16384 00:11:09.725 Sector size: 4096 (CPU page size: 4096) 00:11:09.725 Filesystem size: 510.00MiB 00:11:09.725 Block group profiles: 00:11:09.725 Data: single 8.00MiB 00:11:09.725 Metadata: DUP 32.00MiB 00:11:09.725 System: DUP 8.00MiB 00:11:09.725 SSD detected: yes 00:11:09.725 Zoned device: no 00:11:09.725 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:09.725 Checksum: crc32c 00:11:09.725 Number of devices: 1 00:11:09.725 Devices: 00:11:09.725 ID SIZE PATH 00:11:09.725 1 510.00MiB /dev/nvme0n1p1 00:11:09.725 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71518 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.725 
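Each filesystem variant (ext4 above, btrfs here, xfs next) then runs the same exercise-and-verify body from target/filesystem.sh@23-43, condensed below; $nvmfpid stands for the target PID (71518 in this run), and the retry scaffolding around umount is omitted.

    # Common body of the per-filesystem test: use the mounted filesystem,
    # unmount, then confirm the target survived and the devices are intact.
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # nvmf_tgt must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still exposed
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present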
15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.725 ************************************ 00:11:09.725 END TEST filesystem_btrfs 00:11:09.725 ************************************ 00:11:09.725 00:11:09.725 real 0m0.219s 00:11:09.725 user 0m0.018s 00:11:09.725 sys 0m0.061s 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.725 ************************************ 00:11:09.725 START TEST filesystem_xfs 00:11:09.725 ************************************ 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:09.725 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:09.726 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:09.726 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:09.726 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:09.726 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:09.726 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:09.988 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:09.988 = sectsz=512 attr=2, projid32bit=1 00:11:09.988 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:09.988 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:09.988 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:09.988 = sunit=0 swidth=0 blks 00:11:09.988 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:09.988 log =internal log bsize=4096 blocks=16384, version=2 00:11:09.988 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:09.988 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:10.553 Discarding blocks...Done. 00:11:10.553 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:10.553 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71518 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.177 ************************************ 00:11:13.177 END TEST filesystem_xfs 00:11:13.177 ************************************ 00:11:13.177 00:11:13.177 real 0m3.337s 00:11:13.177 user 0m0.023s 00:11:13.177 sys 0m0.050s 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.177 15:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:13.177 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71518 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 71518 ']' 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 71518 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71518 00:11:13.435 killing process with pid 71518 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71518' 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 71518 00:11:13.435 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 71518 00:11:13.693 ************************************ 00:11:13.693 END TEST nvmf_filesystem_no_in_capsule 00:11:13.693 ************************************ 00:11:13.693 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:13.693 00:11:13.693 real 0m14.512s 00:11:13.693 user 0m55.286s 00:11:13.693 sys 0m2.109s 00:11:13.693 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.693 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.693 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:13.694 ************************************ 00:11:13.694 START TEST nvmf_filesystem_in_capsule 00:11:13.694 ************************************ 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=71886 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 71886 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 71886 ']' 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:13.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
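The teardown that just closed the no_in_capsule fixture (filesystem.sh@91-101 plus killprocess from common/autotest_common.sh@950-974) reduces to the sketch below. The commands are taken from the trace; the handling of an already-dead process and of targets launched via sudo is elided here.

    # Fixture teardown as traced above (pid 71518 in this run):
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # drop the test partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # waitforserial_disconnect: poll lsblk until SPDKISFASTANDAWESOME is gone
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                # process must still be running
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" here
        fi
        # (a special case for processes launched via sudo is elided)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }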
00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:13.694 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.694 [2024-10-01 15:24:12.823911] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:11:13.694 [2024-10-01 15:24:12.824016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.952 [2024-10-01 15:24:12.960656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.952 [2024-10-01 15:24:13.049239] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.952 [2024-10-01 15:24:13.049332] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.952 [2024-10-01 15:24:13.049356] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.952 [2024-10-01 15:24:13.049371] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.952 [2024-10-01 15:24:13.049382] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.953 [2024-10-01 15:24:13.049508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.953 [2024-10-01 15:24:13.049628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.953 [2024-10-01 15:24:13.050159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.953 [2024-10-01 15:24:13.050185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 [2024-10-01 15:24:13.874699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.882 15:24:13 
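For the in_capsule fixture, nvmfappstart (traced above) boils down to launching the target inside the dedicated network namespace and blocking until its RPC socket answers. The launch line is verbatim from the log; the readiness loop is an assumption, since waitforlisten's internals are not visible in this excerpt.

    # nvmfappstart -m 0xF, as launched above:
    #   -i 0       shared-memory id (matches the spdk0 file prefix in the EAL args)
    #   -e 0xFFFF  tracepoint group mask ("Tracepoint Group Mask 0xFFFF specified")
    #   -m 0xF     core mask: four reactors on cores 0-3, as the log reports
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Assumed readiness probe; the trace only shows the wait message and success.
    while ! rpc_cmd rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid"   # abort if the target died during startup
        sleep 0.5
    done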
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 Malloc1 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 [2024-10-01 15:24:13.996685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:14.882 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:14.882 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:14.882 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:14.882 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:14.882 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:14.882 15:24:14 
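The target-side provisioning just traced (filesystem.sh@52-56) is five RPCs, all reproduced verbatim below; -c 4096 is the in-capsule data size this fixture exists to exercise.

    # Provisioning RPCs as traced above:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -u: io unit size, -c: 4 KiB in-capsule data
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB ramdisk with 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    # -a: allow any host, -s: serial number later matched on the initiator side
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420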
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:14.882 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:14.882 { 00:11:14.882 "aliases": [ 00:11:14.882 "cd372dba-af89-4509-8510-4d7566a943a4" 00:11:14.882 ], 00:11:14.882 "assigned_rate_limits": { 00:11:14.882 "r_mbytes_per_sec": 0, 00:11:14.882 "rw_ios_per_sec": 0, 00:11:14.882 "rw_mbytes_per_sec": 0, 00:11:14.882 "w_mbytes_per_sec": 0 00:11:14.882 }, 00:11:14.882 "block_size": 512, 00:11:14.882 "claim_type": "exclusive_write", 00:11:14.882 "claimed": true, 00:11:14.882 "driver_specific": {}, 00:11:14.882 "memory_domains": [ 00:11:14.882 { 00:11:14.882 "dma_device_id": "system", 00:11:14.882 "dma_device_type": 1 00:11:14.882 }, 00:11:14.882 { 00:11:14.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.882 "dma_device_type": 2 00:11:14.882 } 00:11:14.882 ], 00:11:14.882 "name": "Malloc1", 00:11:14.882 "num_blocks": 1048576, 00:11:14.882 "product_name": "Malloc disk", 00:11:14.882 "supported_io_types": { 00:11:14.882 "abort": true, 00:11:14.882 "compare": false, 00:11:14.882 "compare_and_write": false, 00:11:14.882 "copy": true, 00:11:14.882 "flush": true, 00:11:14.882 "get_zone_info": false, 00:11:14.882 "nvme_admin": false, 00:11:14.882 "nvme_io": false, 00:11:14.882 "nvme_io_md": false, 00:11:14.882 "nvme_iov_md": false, 00:11:14.882 "read": true, 00:11:14.882 "reset": true, 00:11:14.882 "seek_data": false, 00:11:14.882 "seek_hole": false, 00:11:14.882 "unmap": true, 00:11:14.882 "write": true, 00:11:14.882 "write_zeroes": true, 00:11:14.882 "zcopy": true, 00:11:14.882 "zone_append": false, 00:11:14.882 "zone_management": false 00:11:14.882 }, 00:11:14.882 "uuid": "cd372dba-af89-4509-8510-4d7566a943a4", 00:11:14.882 "zoned": false 00:11:14.882 } 00:11:14.882 ]' 00:11:14.882 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:15.139 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:17.685 15:24:16 
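On the host side, the connect-and-prepare sequence above (filesystem.sh@58-68) condenses to the sketch below. waitforserial's polling loop is simplified (the traced loop allows up to 16 tries, 2 s apart), and the sysfs multiplication inside sec_size_to_bytes is an assumption; only its 536870912-byte result appears in the trace.

    # Host-side attach and prep:
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf \
        --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    # sec_size_to_bytes: /sys/block/*/size counts 512 B sectors (multiplication assumed)
    nvme_size=$(( $(cat /sys/block/$nvme_name/size) * 512 ))
    (( nvme_size == malloc_size ))            # both sides agree on 536870912 bytes
    mkdir -p /mnt/device
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%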
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:17.685 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.617 ************************************ 00:11:18.617 START TEST filesystem_in_capsule_ext4 00:11:18.617 ************************************ 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:18.617 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:18.617 mke2fs 1.47.0 (5-Feb-2023) 00:11:18.617 Discarding device blocks: 0/522240 done 00:11:18.617 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:18.617 Filesystem UUID: af52bb19-18f1-4686-be18-619bd3f76def 00:11:18.617 Superblock backups stored on blocks: 00:11:18.617 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:18.617 00:11:18.617 Allocating group tables: 0/64 done 00:11:18.618 Writing inode tables: 
0/64 done 00:11:18.618 Creating journal (8192 blocks): done 00:11:18.618 Writing superblocks and filesystem accounting information: 0/64 done 00:11:18.618 00:11:18.618 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:18.618 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:23.880 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:23.880 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 71886 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:23.880 00:11:23.880 real 0m5.546s 00:11:23.880 user 0m0.027s 00:11:23.880 sys 0m0.050s 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.880 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:23.880 ************************************ 00:11:23.880 END TEST filesystem_in_capsule_ext4 00:11:23.880 ************************************ 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.139 
************************************ 00:11:24.139 START TEST filesystem_in_capsule_btrfs 00:11:24.139 ************************************ 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:24.139 btrfs-progs v6.8.1 00:11:24.139 See https://btrfs.readthedocs.io for more information. 00:11:24.139 00:11:24.139 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:24.139 NOTE: several default settings have changed in version 5.15, please make sure 00:11:24.139 this does not affect your deployments: 00:11:24.139 - DUP for metadata (-m dup) 00:11:24.139 - enabled no-holes (-O no-holes) 00:11:24.139 - enabled free-space-tree (-R free-space-tree) 00:11:24.139 00:11:24.139 Label: (null) 00:11:24.139 UUID: 2d40e0ec-7832-49d3-9e5f-e8cbd3dde0ab 00:11:24.139 Node size: 16384 00:11:24.139 Sector size: 4096 (CPU page size: 4096) 00:11:24.139 Filesystem size: 510.00MiB 00:11:24.139 Block group profiles: 00:11:24.139 Data: single 8.00MiB 00:11:24.139 Metadata: DUP 32.00MiB 00:11:24.139 System: DUP 8.00MiB 00:11:24.139 SSD detected: yes 00:11:24.139 Zoned device: no 00:11:24.139 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:24.139 Checksum: crc32c 00:11:24.139 Number of devices: 1 00:11:24.139 Devices: 00:11:24.139 ID SIZE PATH 00:11:24.139 1 510.00MiB /dev/nvme0n1p1 00:11:24.139 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 71886 00:11:24.139 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:24.140 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:24.140 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:24.140 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:24.140 00:11:24.140 real 0m0.215s 00:11:24.140 user 0m0.021s 00:11:24.140 sys 0m0.053s 00:11:24.140 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.140 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- 
# set +x 00:11:24.140 ************************************ 00:11:24.140 END TEST filesystem_in_capsule_btrfs 00:11:24.140 ************************************ 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.398 ************************************ 00:11:24.398 START TEST filesystem_in_capsule_xfs 00:11:24.398 ************************************ 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:24.398 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:24.398 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:24.398 = sectsz=512 attr=2, projid32bit=1 00:11:24.398 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:24.398 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:24.398 data = bsize=4096 blocks=130560, imaxpct=25 00:11:24.398 = sunit=0 swidth=0 blks 00:11:24.398 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:24.398 log =internal log bsize=4096 blocks=16384, version=2 00:11:24.398 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:24.398 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:24.964 Discarding blocks...Done. 
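As a sanity check on the mkfs.xfs geometry above: the data section reports bsize=4096 and blocks=130560, i.e. 130560 × 4096 B = 534,773,760 B = exactly 510 MiB, matching the reported 510.00 MiB filesystem size; likewise the mke2fs runs created 522240 1 KiB blocks = 510 MiB. Both reflect the 512 MiB malloc bdev minus the overhead of the GPT partition created earlier.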
00:11:24.964 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:24.964 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 71886 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:26.863 00:11:26.863 real 0m2.574s 00:11:26.863 user 0m0.015s 00:11:26.863 sys 0m0.051s 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:26.863 ************************************ 00:11:26.863 END TEST filesystem_in_capsule_xfs 00:11:26.863 ************************************ 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:26.863 15:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.863 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.863 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:26.863 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:26.863 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.863 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:26.863 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.863 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:26.863 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.863 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.863 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 71886 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 71886 ']' 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 71886 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71886 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.122 killing process with pid 71886 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71886' 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 71886 00:11:27.122 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 71886 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:27.381 00:11:27.381 real 0m13.570s 00:11:27.381 user 0m51.744s 00:11:27.381 sys 0m1.953s 00:11:27.381 15:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.381 ************************************ 00:11:27.381 END TEST nvmf_filesystem_in_capsule 00:11:27.381 ************************************ 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.381 rmmod nvme_tcp 00:11:27.381 rmmod nvme_fabrics 00:11:27.381 rmmod nvme_keyring 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:27.381 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:27.639 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.639 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:27.639 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
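The global teardown now in progress (nvmftestfini and nvmf_veth_fini; the remaining link and namespace deletions follow below) amounts to the following. The commands are taken from the trace; the final netns deletion is an assumption, since remove_spdk_ns's body is redirected away in the log.

    # nvmftestfini, condensed from the trace:
    modprobe -v -r nvme-tcp        # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only this test's rules
    for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$link" nomaster
        ip link set "$link" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed: performed by remove_spdk_ns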
00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:11:27.640 00:11:27.640 real 0m29.427s 00:11:27.640 user 1m47.539s 00:11:27.640 sys 0m4.571s 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.640 ************************************ 00:11:27.640 END TEST nvmf_filesystem 00:11:27.640 ************************************ 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.640 ************************************ 00:11:27.640 START TEST nvmf_target_discovery 00:11:27.640 ************************************ 00:11:27.640 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:27.899 * Looking for test storage... 
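[Editor's note: the END TEST / START TEST banners and the real/user/sys summaries above come from the harness's run_test wrapper; the test-storage probe it just kicked off resumes in the next entry. A hypothetical reduction of that wrapper, for orientation only (the real helper lives in autotest_common.sh and does more bookkeeping):

run_test() {
  # Print banners around the test and let `time` emit the
  # real/user/sys lines seen throughout this log. Simplified guess,
  # not a verbatim copy of the SPDK helper.
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}]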
00:11:27.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:27.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.899 --rc genhtml_branch_coverage=1 00:11:27.899 --rc genhtml_function_coverage=1 00:11:27.899 --rc genhtml_legend=1 00:11:27.899 --rc geninfo_all_blocks=1 00:11:27.899 --rc geninfo_unexecuted_blocks=1 00:11:27.899 00:11:27.899 ' 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:27.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.899 --rc genhtml_branch_coverage=1 00:11:27.899 --rc genhtml_function_coverage=1 00:11:27.899 --rc genhtml_legend=1 00:11:27.899 --rc geninfo_all_blocks=1 00:11:27.899 --rc geninfo_unexecuted_blocks=1 00:11:27.899 00:11:27.899 ' 00:11:27.899 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:27.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.899 --rc genhtml_branch_coverage=1 00:11:27.899 --rc genhtml_function_coverage=1 00:11:27.900 --rc genhtml_legend=1 00:11:27.900 --rc geninfo_all_blocks=1 00:11:27.900 --rc geninfo_unexecuted_blocks=1 00:11:27.900 00:11:27.900 ' 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:27.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.900 --rc genhtml_branch_coverage=1 00:11:27.900 --rc genhtml_function_coverage=1 00:11:27.900 --rc genhtml_legend=1 00:11:27.900 --rc geninfo_all_blocks=1 00:11:27.900 --rc geninfo_unexecuted_blocks=1 00:11:27.900 00:11:27.900 ' 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.900 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
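[Editor's note: the assignments above and immediately below name the interfaces nvmf_veth_init is about to create. Condensed to a single initiator/target pair (the log builds two of each, plus the 4420 iptables ACCEPT rules), the topology the following entries construct amounts to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                      # host-side ends join one bridge
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

All commands are taken from the entries below; only the second if2/br2 pair is elided for brevity.]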
00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:27.900 Cannot find device "nvmf_init_br" 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:27.900 Cannot find device "nvmf_init_br2" 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:27.900 Cannot find device "nvmf_tgt_br" 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.900 Cannot find device "nvmf_tgt_br2" 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:11:27.900 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:27.901 Cannot find device "nvmf_init_br" 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:27.901 Cannot find device "nvmf_init_br2" 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:27.901 Cannot find device "nvmf_tgt_br" 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:27.901 Cannot find device "nvmf_tgt_br2" 00:11:27.901 15:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:27.901 Cannot find device "nvmf_br" 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:27.901 Cannot find device "nvmf_init_if" 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:11:27.901 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:28.160 Cannot find device "nvmf_init_if2" 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:28.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:28.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:28.160 15:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:28.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:28.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:11:28.160 00:11:28.160 --- 10.0.0.3 ping statistics --- 00:11:28.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.160 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:28.160 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:28.160 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:11:28.160 00:11:28.160 --- 10.0.0.4 ping statistics --- 00:11:28.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.160 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:28.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:11:28.160 00:11:28.160 --- 10.0.0.1 ping statistics --- 00:11:28.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.160 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:28.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:11:28.160 00:11:28.160 --- 10.0.0.2 ping statistics --- 00:11:28.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.160 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # return 0 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=72463 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
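[Editor's note: the entry above launches nvmf_tgt inside the target namespace; the waitforlisten step that follows blocks until pid 72463 answers on /var/tmp/spdk.sock. A rough, assumed shape of that wait loop (the pid check and the rpc_get_methods probe are illustrative, not the exact helper):

pid=72463
for _ in $(seq 1 100); do
  kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
  # rpc_get_methods is a cheap RPC that succeeds once the socket is live
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done]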
00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 72463 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 72463 ']' 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.160 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.161 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.161 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.161 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.419 [2024-10-01 15:24:27.396807] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:11:28.419 [2024-10-01 15:24:27.396905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.419 [2024-10-01 15:24:27.536226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.678 [2024-10-01 15:24:27.606157] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.678 [2024-10-01 15:24:27.606228] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.678 [2024-10-01 15:24:27.606243] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.678 [2024-10-01 15:24:27.606253] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.678 [2024-10-01 15:24:27.606262] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
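[Editor's note: the startup notices above advertise the target's runtime tracing hooks. Per the log's own suggestions, a live snapshot or a post-mortem copy would look like:

spdk_trace -s nvmf -i 0          # live snapshot, exactly as advertised above
cp /dev/shm/nvmf_trace.0 /tmp/   # keep the shm trace for offline analysis/debug]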
00:11:28.678 [2024-10-01 15:24:27.606465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.678 [2024-10-01 15:24:27.606518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.678 [2024-10-01 15:24:27.607112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.678 [2024-10-01 15:24:27.607132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.678 [2024-10-01 15:24:27.755854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.678 Null1 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.678 15:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.678 [2024-10-01 15:24:27.800046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:28.678 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.679 Null2 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:28.679 Null3 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.679 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 Null4 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.938 15:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.938 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -a 10.0.0.3 -s 4420 00:11:28.938 00:11:28.938 Discovery Log Number of Records 6, Generation counter 6 00:11:28.938 =====Discovery Log Entry 0====== 00:11:28.938 trtype: tcp 00:11:28.938 adrfam: ipv4 00:11:28.938 subtype: current discovery subsystem 00:11:28.938 treq: not required 00:11:28.938 portid: 0 00:11:28.938 trsvcid: 4420 00:11:28.938 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.938 traddr: 10.0.0.3 00:11:28.938 eflags: explicit discovery connections, duplicate discovery information 00:11:28.938 sectype: none 00:11:28.938 =====Discovery Log Entry 1====== 00:11:28.938 trtype: tcp 00:11:28.938 adrfam: ipv4 00:11:28.938 subtype: nvme subsystem 00:11:28.938 treq: not required 00:11:28.938 portid: 0 00:11:28.938 trsvcid: 4420 00:11:28.938 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:28.938 traddr: 10.0.0.3 00:11:28.938 eflags: none 00:11:28.938 sectype: none 00:11:28.938 =====Discovery Log Entry 2====== 00:11:28.938 trtype: tcp 00:11:28.938 adrfam: ipv4 00:11:28.938 subtype: nvme subsystem 00:11:28.938 treq: not required 00:11:28.938 portid: 0 00:11:28.938 trsvcid: 4420 00:11:28.938 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:28.938 traddr: 10.0.0.3 00:11:28.938 eflags: none 00:11:28.938 sectype: none 00:11:28.938 =====Discovery Log Entry 3====== 00:11:28.938 trtype: tcp 00:11:28.938 adrfam: ipv4 00:11:28.938 subtype: nvme subsystem 00:11:28.938 treq: not required 00:11:28.938 portid: 0 00:11:28.938 trsvcid: 4420 00:11:28.938 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:28.938 traddr: 10.0.0.3 00:11:28.938 eflags: none 00:11:28.938 sectype: none 00:11:28.938 =====Discovery Log Entry 4====== 00:11:28.938 trtype: tcp 00:11:28.938 adrfam: ipv4 00:11:28.938 subtype: nvme subsystem 
00:11:28.938 treq: not required 00:11:28.938 portid: 0 00:11:28.938 trsvcid: 4420 00:11:28.938 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:28.938 traddr: 10.0.0.3 00:11:28.938 eflags: none 00:11:28.938 sectype: none 00:11:28.938 =====Discovery Log Entry 5====== 00:11:28.938 trtype: tcp 00:11:28.938 adrfam: ipv4 00:11:28.938 subtype: discovery subsystem referral 00:11:28.938 treq: not required 00:11:28.938 portid: 0 00:11:28.938 trsvcid: 4430 00:11:28.938 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.938 traddr: 10.0.0.3 00:11:28.938 eflags: none 00:11:28.938 sectype: none 00:11:28.938 Perform nvmf subsystem discovery via RPC 00:11:28.938 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:28.938 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:28.938 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.938 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 [ 00:11:28.938 { 00:11:28.938 "allow_any_host": true, 00:11:28.939 "hosts": [], 00:11:28.939 "listen_addresses": [ 00:11:28.939 { 00:11:28.939 "adrfam": "IPv4", 00:11:28.939 "traddr": "10.0.0.3", 00:11:28.939 "trsvcid": "4420", 00:11:28.939 "trtype": "TCP" 00:11:28.939 } 00:11:28.939 ], 00:11:28.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:28.939 "subtype": "Discovery" 00:11:28.939 }, 00:11:28.939 { 00:11:28.939 "allow_any_host": true, 00:11:28.939 "hosts": [], 00:11:28.939 "listen_addresses": [ 00:11:28.939 { 00:11:28.939 "adrfam": "IPv4", 00:11:28.939 "traddr": "10.0.0.3", 00:11:28.939 "trsvcid": "4420", 00:11:28.939 "trtype": "TCP" 00:11:28.939 } 00:11:28.939 ], 00:11:28.939 "max_cntlid": 65519, 00:11:28.939 "max_namespaces": 32, 00:11:28.939 "min_cntlid": 1, 00:11:28.939 "model_number": "SPDK bdev Controller", 00:11:28.939 "namespaces": [ 00:11:28.939 { 00:11:28.939 "bdev_name": "Null1", 00:11:28.939 "name": "Null1", 00:11:28.939 "nguid": "A2AC4D57A5BE4310944C81C7153F928F", 00:11:28.939 "nsid": 1, 00:11:28.939 "uuid": "a2ac4d57-a5be-4310-944c-81c7153f928f" 00:11:28.939 } 00:11:28.939 ], 00:11:28.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.939 "serial_number": "SPDK00000000000001", 00:11:28.939 "subtype": "NVMe" 00:11:28.939 }, 00:11:28.939 { 00:11:28.939 "allow_any_host": true, 00:11:28.939 "hosts": [], 00:11:28.939 "listen_addresses": [ 00:11:28.939 { 00:11:28.939 "adrfam": "IPv4", 00:11:28.939 "traddr": "10.0.0.3", 00:11:28.939 "trsvcid": "4420", 00:11:28.939 "trtype": "TCP" 00:11:28.939 } 00:11:28.939 ], 00:11:28.939 "max_cntlid": 65519, 00:11:28.939 "max_namespaces": 32, 00:11:28.939 "min_cntlid": 1, 00:11:28.939 "model_number": "SPDK bdev Controller", 00:11:28.939 "namespaces": [ 00:11:28.939 { 00:11:28.939 "bdev_name": "Null2", 00:11:28.939 "name": "Null2", 00:11:28.939 "nguid": "246EE9342E8742259192AD7C611E42DC", 00:11:28.939 "nsid": 1, 00:11:28.939 "uuid": "246ee934-2e87-4225-9192-ad7c611e42dc" 00:11:28.939 } 00:11:28.939 ], 00:11:28.939 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:28.939 "serial_number": "SPDK00000000000002", 00:11:28.939 "subtype": "NVMe" 00:11:28.939 }, 00:11:28.939 { 00:11:28.939 "allow_any_host": true, 00:11:28.939 "hosts": [], 00:11:28.939 "listen_addresses": [ 00:11:28.939 { 00:11:28.939 "adrfam": "IPv4", 00:11:28.939 "traddr": "10.0.0.3", 00:11:28.939 "trsvcid": "4420", 00:11:28.939 
"trtype": "TCP" 00:11:28.939 } 00:11:28.939 ], 00:11:28.939 "max_cntlid": 65519, 00:11:28.939 "max_namespaces": 32, 00:11:28.939 "min_cntlid": 1, 00:11:28.939 "model_number": "SPDK bdev Controller", 00:11:28.939 "namespaces": [ 00:11:28.939 { 00:11:28.939 "bdev_name": "Null3", 00:11:28.939 "name": "Null3", 00:11:28.939 "nguid": "F61B75AE16F84A78AA6E8BD8DF08B3B5", 00:11:28.939 "nsid": 1, 00:11:28.939 "uuid": "f61b75ae-16f8-4a78-aa6e-8bd8df08b3b5" 00:11:28.939 } 00:11:28.939 ], 00:11:28.939 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:28.939 "serial_number": "SPDK00000000000003", 00:11:28.939 "subtype": "NVMe" 00:11:28.939 }, 00:11:28.939 { 00:11:28.939 "allow_any_host": true, 00:11:28.939 "hosts": [], 00:11:28.939 "listen_addresses": [ 00:11:28.939 { 00:11:28.939 "adrfam": "IPv4", 00:11:28.939 "traddr": "10.0.0.3", 00:11:28.939 "trsvcid": "4420", 00:11:28.939 "trtype": "TCP" 00:11:28.939 } 00:11:28.939 ], 00:11:28.939 "max_cntlid": 65519, 00:11:28.939 "max_namespaces": 32, 00:11:28.939 "min_cntlid": 1, 00:11:28.939 "model_number": "SPDK bdev Controller", 00:11:28.939 "namespaces": [ 00:11:28.939 { 00:11:28.939 "bdev_name": "Null4", 00:11:28.939 "name": "Null4", 00:11:28.939 "nguid": "D0DB021B32894425B17A37A69F30ABA4", 00:11:28.939 "nsid": 1, 00:11:28.939 "uuid": "d0db021b-3289-4425-b17a-37a69f30aba4" 00:11:28.939 } 00:11:28.939 ], 00:11:28.939 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:28.939 "serial_number": "SPDK00000000000004", 00:11:28.939 "subtype": "NVMe" 00:11:28.939 } 00:11:28.939 ] 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.939 15:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.939 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:29.198 15:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.198 rmmod nvme_tcp 00:11:29.198 rmmod nvme_fabrics 00:11:29.198 rmmod nvme_keyring 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 72463 ']' 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 72463 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 72463 ']' 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 72463 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72463 00:11:29.198 killing process with pid 72463 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72463' 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 72463 00:11:29.198 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 72463 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:29.457 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:29.716 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.716 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.716 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:11:29.717 00:11:29.717 real 0m1.947s 00:11:29.717 user 0m3.700s 00:11:29.717 sys 0m0.653s 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.717 ************************************ 00:11:29.717 END TEST nvmf_target_discovery 00:11:29.717 ************************************ 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.717 ************************************ 00:11:29.717 START TEST nvmf_referrals 00:11:29.717 ************************************ 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:29.717 * Looking for test storage... 00:11:29.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:29.717 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:29.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.976 --rc genhtml_branch_coverage=1 00:11:29.976 --rc genhtml_function_coverage=1 00:11:29.976 --rc genhtml_legend=1 00:11:29.976 --rc geninfo_all_blocks=1 00:11:29.976 --rc geninfo_unexecuted_blocks=1 00:11:29.976 00:11:29.976 ' 00:11:29.976 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:29.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.976 --rc genhtml_branch_coverage=1 00:11:29.977 --rc genhtml_function_coverage=1 00:11:29.977 --rc genhtml_legend=1 00:11:29.977 --rc geninfo_all_blocks=1 00:11:29.977 --rc geninfo_unexecuted_blocks=1 00:11:29.977 00:11:29.977 ' 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:29.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.977 --rc genhtml_branch_coverage=1 00:11:29.977 --rc genhtml_function_coverage=1 00:11:29.977 --rc genhtml_legend=1 00:11:29.977 --rc geninfo_all_blocks=1 00:11:29.977 --rc geninfo_unexecuted_blocks=1 00:11:29.977 00:11:29.977 ' 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:29.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.977 --rc genhtml_branch_coverage=1 00:11:29.977 --rc genhtml_function_coverage=1 00:11:29.977 --rc genhtml_legend=1 00:11:29.977 --rc geninfo_all_blocks=1 00:11:29.977 --rc geninfo_unexecuted_blocks=1 00:11:29.977 00:11:29.977 ' 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
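The lt/cmp_versions trace above (scripts/common.sh) splits both version strings on '.', '-' and ':' and compares them numerically field by field; here 1.15 < 2, so the pre-2.x lcov option set is selected. A condensed sketch of the same comparison, with a hypothetical helper name (version_lt is illustrative, not the script's own function):

    # sketch: field-wise dotted-version comparison; succeeds when $1 < $2
    version_lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # first smaller field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x'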
00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.977 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:29.977 15:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:29.977 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:11:29.978 Cannot find device "nvmf_init_br"
00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true
00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:11:29.978 Cannot find device "nvmf_init_br2"
00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true
00:11:29.978 15:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:11:29.978 Cannot find device "nvmf_tgt_br"
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:11:29.978 Cannot find device "nvmf_tgt_br2"
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:11:29.978 Cannot find device "nvmf_init_br"
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:11:29.978 Cannot find device "nvmf_init_br2"
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:11:29.978 Cannot find device "nvmf_tgt_br"
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:11:29.978 Cannot find device "nvmf_tgt_br2"
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:11:29.978 Cannot find device "nvmf_br"
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:11:29.978 Cannot find device "nvmf_init_if"
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:11:29.978 Cannot find device "nvmf_init_if2"
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true
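Every probe in the pre-cleanup block above is allowed to fail: the run executes under set -e, yet each ip command that may hit a missing device is immediately followed by a true entry in the trace, the signature of an "|| true"-style fallback. A hedged one-line sketch of the idiom (the exact wording in nvmf/common.sh may differ):

    # sketch: idempotent cleanup; '|| true' swallows 'Cannot find device' so set -e does not abort
    ip link delete nvmf_br type bridge || true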
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:29.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:29.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:11:29.978 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:29.979 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
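After the @177-@204 commands the test network exists but is not yet bridged: the initiator ends of the veth pairs stay on the host while the target ends sit inside nvmf_tgt_ns_spdk. A reduced sketch of the same build-up with a single initiator/target pair (names and addresses as in the trace; run as root):

    # sketch: one initiator/one target slice of the veth topology above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The *_br peers are what get enslaved to the nvmf_br bridge in the next step, which is what makes 10.0.0.1 and 10.0.0.3 reachable from each other.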
00:11:30.238 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:11:30.239 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:30.239 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms
00:11:30.239 
00:11:30.239 --- 10.0.0.3 ping statistics ---
00:11:30.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:30.239 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:11:30.239 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:11:30.239 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms
00:11:30.239 
00:11:30.239 --- 10.0.0.4 ping statistics ---
00:11:30.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:30.239 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:30.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:30.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms
00:11:30.239 
00:11:30.239 --- 10.0.0.1 ping statistics ---
00:11:30.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:30.239 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
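Two things happen above before any NVMe traffic flows: each firewall rule is inserted through the ipts wrapper, which tags the rule with an SPDK_NVMF comment so that teardown (the iptr helper seen later in this log) can strip only SPDK's rules, and one ping per address pair proves bridge connectivity. A sketch of both, mirroring the traced commands:

    # sketch: tag rules so cleanup can filter them out again
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # later, teardown keeps every rule except the tagged ones:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # connectivity probe across the bridge
    ping -c 1 10.0.0.3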
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:11:30.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:30.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms
00:11:30.239 
00:11:30.239 --- 10.0.0.2 ping statistics ---
00:11:30.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:30.239 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # return 0
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:11:30.239 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=72732
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 72732
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 72732 ']'
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:30.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:30.497 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:30.497 [2024-10-01 15:24:29.478152] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization...
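nvmfappstart launches the target inside the namespace and waitforlisten polls until the RPC socket answers; only then does the test proceed (the target's own startup banner continues below). A simplified sketch of that start-and-wait pattern, assuming scripts/rpc.py is available (waitforlisten's real implementation in common/autotest_common.sh is more thorough):

    # sketch: start nvmf_tgt in the test namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1    # keep polling until the target is listening
    done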
00:11:30.497 [2024-10-01 15:24:29.478253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.497 [2024-10-01 15:24:29.645141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.757 [2024-10-01 15:24:29.719409] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.757 [2024-10-01 15:24:29.719477] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.757 [2024-10-01 15:24:29.719489] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.757 [2024-10-01 15:24:29.719498] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.757 [2024-10-01 15:24:29.719505] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.757 [2024-10-01 15:24:29.719605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.757 [2024-10-01 15:24:29.719881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.757 [2024-10-01 15:24:29.720297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.757 [2024-10-01 15:24:29.720312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.757 [2024-10-01 15:24:29.856589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.757 [2024-10-01 15:24:29.877857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:30.757 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.015 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:31.015 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:31.015 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.015 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.015 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.015 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.015 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.016 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.016 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -a 10.0.0.3 -s 8009 -o json
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:31.016 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.274 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:11:31.274 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:11:31.274 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
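get_referral_ips reads the same referral list two ways, and the test asserts the results match: once from the target over JSON-RPC, once as an NVMe host through the discovery service. A sketch of the pair, assuming SPDK's scripts/rpc.py and nvme-cli as used in this run:

    # sketch: referral list as the target reports it vs. as a host discovers it
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # the two sorted address lists should be identical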
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -a 10.0.0.3 -s 8009 -o json
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:11:31.275 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:31.533 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.792 15:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.792 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:31.793 15:24:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 
--hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.051 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.309 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:32.309 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:32.309 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:32.309 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.310 
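Note: the referrals.sh trace above checks one invariant from two sides: the referral list the target reports over its RPC socket must match the referral entries a host reads back from the discovery log page. A minimal sketch of that round-trip, reusing the address, port, and NQN from this trace (scripts/rpc.py stands in for the harness's rpc_cmd wrapper; assumes a target is already listening on 10.0.0.3:8009):

  # Target-side view: referrals as the RPC interface reports them.
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # Host-side view: every discovery-log record that is not the current
  # discovery subsystem is a referral; both views must list the same addresses.
  nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
      sort

  # Removing a referral must drop it from both views on the next query.
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 \
      -n nqn.2016-06.io.spdk:cnode1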
15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.310 rmmod nvme_tcp 00:11:32.310 rmmod nvme_fabrics 00:11:32.310 rmmod nvme_keyring 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 72732 ']' 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 72732 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 72732 ']' 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 72732 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:32.310 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72732 00:11:32.568 killing process with pid 72732 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72732' 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 72732 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 72732 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:32.568 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:11:32.827 00:11:32.827 real 0m3.140s 00:11:32.827 user 0m8.773s 00:11:32.827 sys 0m0.888s 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.827 ************************************ 00:11:32.827 END TEST nvmf_referrals 00:11:32.827 ************************************ 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.827 ************************************ 00:11:32.827 START TEST nvmf_connect_disconnect 00:11:32.827 ************************************ 00:11:32.827 15:24:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:33.087 * Looking for test storage... 
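Note: the nvmftestfini teardown that closed nvmf_referrals above included iptr, which never flushes the whole firewall ruleset. Every rule the harness adds is tagged with an SPDK_NVMF comment, so teardown strips exactly those rules and leaves the host's own rules alone. A sketch of the pattern, with the function bodies inferred from the ipts/iptr expansions visible in this trace:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # tagged rule
  iptr    # drops only the SPDK_NVMF-tagged rules, restores everything else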
00:11:33.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:33.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.087 --rc genhtml_branch_coverage=1 00:11:33.087 --rc genhtml_function_coverage=1 00:11:33.087 --rc genhtml_legend=1 00:11:33.087 --rc geninfo_all_blocks=1 00:11:33.087 --rc geninfo_unexecuted_blocks=1 00:11:33.087 00:11:33.087 ' 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:33.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.087 --rc genhtml_branch_coverage=1 00:11:33.087 --rc genhtml_function_coverage=1 00:11:33.087 --rc genhtml_legend=1 00:11:33.087 --rc geninfo_all_blocks=1 00:11:33.087 --rc geninfo_unexecuted_blocks=1 00:11:33.087 00:11:33.087 ' 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:33.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.087 --rc genhtml_branch_coverage=1 00:11:33.087 --rc genhtml_function_coverage=1 00:11:33.087 --rc genhtml_legend=1 00:11:33.087 --rc geninfo_all_blocks=1 00:11:33.087 --rc geninfo_unexecuted_blocks=1 00:11:33.087 00:11:33.087 ' 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:33.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.087 --rc genhtml_branch_coverage=1 00:11:33.087 --rc genhtml_function_coverage=1 00:11:33.087 --rc genhtml_legend=1 00:11:33.087 --rc geninfo_all_blocks=1 00:11:33.087 --rc geninfo_unexecuted_blocks=1 00:11:33.087 00:11:33.087 ' 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
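Note: the scripts/common.sh trace above is the harness deciding whether the installed lcov predates version 2, whose command-line options changed. The compare splits both versions on '.' and '-' and walks the components numerically. A condensed sketch of the '<' case actually exercised here (lt 1.15 2):

  lt() {                        # succeed iff version $1 < version $2
      local IFS=.- i
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                  # equal is not less-than
  }

  lt 1.15 2 && echo "old lcov: pass --rc lcov_branch_coverage=1"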
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.087 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.088 15:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.088 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:33.088 Cannot find device "nvmf_init_br" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:33.088 Cannot find device "nvmf_init_br2" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:33.088 Cannot find device "nvmf_tgt_br" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:33.088 Cannot find device "nvmf_tgt_br2" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:33.088 Cannot find device "nvmf_init_br" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:33.088 Cannot find device "nvmf_init_br2" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:33.088 Cannot find device "nvmf_tgt_br" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:33.088 Cannot find device "nvmf_tgt_br2" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:33.088 Cannot find device "nvmf_br" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:33.088 Cannot find device "nvmf_init_if" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:33.088 Cannot find device "nvmf_init_if2" 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:33.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:33.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:33.088 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:33.347 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:33.347 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:33.347 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:33.347 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:33.347 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:33.347 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:33.348 15:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:33.348 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:33.348 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:11:33.348 00:11:33.348 --- 10.0.0.3 ping statistics --- 00:11:33.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.348 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:33.348 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:33.348 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:11:33.348 00:11:33.348 --- 10.0.0.4 ping statistics --- 00:11:33.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.348 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:33.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:33.348 00:11:33.348 --- 10.0.0.1 ping statistics --- 00:11:33.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.348 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:33.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:11:33.348 00:11:33.348 --- 10.0.0.2 ping statistics --- 00:11:33.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.348 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # return 0 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:33.348 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=73078 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 73078 00:11:33.607 15:24:32 
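Note: the ip/ping block above is nvmf_veth_init building the test network the target is about to listen on: veth pairs joined by a bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace so NVMe/TCP traffic crosses a real (virtual) hop. The earlier "Cannot find device" / "Cannot open network namespace" replies are expected, not failures: setup first replays the teardown commands, each probe falling back to true, so leftovers from a previous run are cleared. Condensed sketch, one initiator/target pair shown:

  #  root namespace                          nvmf_tgt_ns_spdk
  #  nvmf_init_if 10.0.0.1/24 --[nvmf_br]--  nvmf_tgt_if 10.0.0.3/24
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ping -c 1 10.0.0.3          # same reachability check as the pings above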
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 73078 ']' 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.607 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:33.607 [2024-10-01 15:24:32.600895] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:11:33.607 [2024-10-01 15:24:32.600980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.607 [2024-10-01 15:24:32.769546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.865 [2024-10-01 15:24:32.848232] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.865 [2024-10-01 15:24:32.848677] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.865 [2024-10-01 15:24:32.848933] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.865 [2024-10-01 15:24:32.849156] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.865 [2024-10-01 15:24:32.849350] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
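Note: nvmfappstart above launches the target inside that namespace and parks until its RPC socket answers; the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line is the harness's waitforlisten helper polling the socket. Sketched with the exact flags from this trace:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # harness helper; returns once /var/tmp/spdk.sock is up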
00:11:33.865 [2024-10-01 15:24:32.849636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.865 [2024-10-01 15:24:32.849724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.865 [2024-10-01 15:24:32.850311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.865 [2024-10-01 15:24:32.850327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.797 [2024-10-01 15:24:33.718003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:34.797 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.798 15:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.798 [2024-10-01 15:24:33.769363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:34.798 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:37.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.350 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:46.350 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:46.350 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:46.350 15:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.350 rmmod nvme_tcp 00:11:46.350 rmmod nvme_fabrics 00:11:46.350 rmmod nvme_keyring 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 73078 ']' 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 73078 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 73078 ']' 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 73078 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
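Note: the five "NQN:... disconnected 1 controller(s)" lines above are the whole test. The target side was assembled with the five RPCs traced earlier, then the host looped connect/disconnect num_iterations=5 times. Recapped with the values from this trace (scripts/rpc.py stands in for rpc_cmd; waitforserial is the harness helper that waits for the namespace with the given serial to appear):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512                       # -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420

  for ((i = 0; i < 5; i++)); do
      nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # prints the lines above
  done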
00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73078 00:11:46.350 killing process with pid 73078 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73078' 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 73078 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 73078 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:46.350 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:46.350 15:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.351 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.351 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:46.351 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.351 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.351 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.351 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:11:46.351 00:11:46.351 real 0m13.566s 00:11:46.351 user 0m48.716s 00:11:46.351 sys 0m2.021s 00:11:46.351 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.351 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.351 ************************************ 00:11:46.351 END TEST nvmf_connect_disconnect 00:11:46.351 ************************************ 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.609 ************************************ 00:11:46.609 START TEST nvmf_multitarget 00:11:46.609 ************************************ 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:46.609 * Looking for test storage... 
00:11:46.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.609 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:46.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.610 --rc genhtml_branch_coverage=1 00:11:46.610 --rc genhtml_function_coverage=1 00:11:46.610 --rc genhtml_legend=1 00:11:46.610 --rc geninfo_all_blocks=1 00:11:46.610 --rc geninfo_unexecuted_blocks=1 00:11:46.610 00:11:46.610 ' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:46.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.610 --rc genhtml_branch_coverage=1 00:11:46.610 --rc genhtml_function_coverage=1 00:11:46.610 --rc genhtml_legend=1 00:11:46.610 --rc geninfo_all_blocks=1 00:11:46.610 --rc geninfo_unexecuted_blocks=1 00:11:46.610 00:11:46.610 ' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:46.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.610 --rc genhtml_branch_coverage=1 00:11:46.610 --rc genhtml_function_coverage=1 00:11:46.610 --rc genhtml_legend=1 00:11:46.610 --rc geninfo_all_blocks=1 00:11:46.610 --rc geninfo_unexecuted_blocks=1 00:11:46.610 00:11:46.610 ' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:46.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.610 --rc genhtml_branch_coverage=1 00:11:46.610 --rc genhtml_function_coverage=1 00:11:46.610 --rc genhtml_legend=1 00:11:46.610 --rc geninfo_all_blocks=1 00:11:46.610 --rc geninfo_unexecuted_blocks=1 00:11:46.610 00:11:46.610 ' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.610 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:46.610 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:46.611 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:46.611 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.611 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:46.611 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:46.611 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:46.611 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:46.611 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:46.611 Cannot find device "nvmf_init_br" 00:11:46.611 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:11:46.611 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:46.868 Cannot find device "nvmf_init_br2" 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:46.868 Cannot find device "nvmf_tgt_br" 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.868 Cannot find device "nvmf_tgt_br2" 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:46.868 Cannot find device "nvmf_init_br" 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:46.868 Cannot find device "nvmf_init_br2" 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:46.868 Cannot find device "nvmf_tgt_br" 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:46.868 Cannot find device "nvmf_tgt_br2" 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:46.868 Cannot find device "nvmf_br" 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:46.868 Cannot find device "nvmf_init_if" 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:46.868 Cannot find device "nvmf_init_if2" 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:46.868 15:24:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:46.868 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:46.868 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:46.868 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:46.868 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:46.868 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:46.868 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:46.868 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:46.868 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:46.868 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:47.126 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.126 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:11:47.126 00:11:47.126 --- 10.0.0.3 ping statistics --- 00:11:47.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.126 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:47.126 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:47.126 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:47.126 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:11:47.126 00:11:47.126 --- 10.0.0.4 ping statistics --- 00:11:47.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.127 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:47.127 00:11:47.127 --- 10.0.0.1 ping statistics --- 00:11:47.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.127 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:47.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:47.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:11:47.127 00:11:47.127 --- 10.0.0.2 ping statistics --- 00:11:47.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.127 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # return 0 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=73528 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 73528 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 73528 ']' 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.127 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:47.127 [2024-10-01 15:24:46.267058] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
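Condensed from the nvmf_veth_init xtrace above (nvmf/common.sh@177-222), the per-test topology for the first initiator/target pair is roughly the following sketch; the second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is set up identically:

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per side; the *_br ends become bridge ports
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator side -> target namespace sanity check

With reachability confirmed, nvmfappstart launches the target inside the namespace by prefixing NVMF_APP with NVMF_TARGET_NS_CMD, i.e. effectively:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF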
00:11:47.127 [2024-10-01 15:24:46.267160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.385 [2024-10-01 15:24:46.403573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.385 [2024-10-01 15:24:46.473923] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.385 [2024-10-01 15:24:46.473983] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.385 [2024-10-01 15:24:46.473997] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.385 [2024-10-01 15:24:46.474007] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.385 [2024-10-01 15:24:46.474017] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.385 [2024-10-01 15:24:46.474160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.385 [2024-10-01 15:24:46.474225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.385 [2024-10-01 15:24:46.474535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.385 [2024-10-01 15:24:46.474545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:47.643 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:47.901 "nvmf_tgt_1" 00:11:47.901 15:24:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:47.901 "nvmf_tgt_2" 00:11:48.158 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:48.158 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:11:48.158 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:48.158 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:48.415 true 00:11:48.415 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:48.415 true 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.673 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.673 rmmod nvme_tcp 00:11:48.930 rmmod nvme_fabrics 00:11:48.930 rmmod nvme_keyring 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 73528 ']' 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 73528 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 73528 ']' 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 73528 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73528 00:11:48.930 killing process with pid 73528 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
73528' 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 73528 00:11:48.930 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 73528 00:11:48.930 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:48.930 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:48.930 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:48.930 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:11:49.192 00:11:49.192 
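Behind the xtrace noise, the body of the multitarget test that just finished reduces to a handful of RPC round-trips. A condensed sketch using the same helper and names taken from the trace (the script itself phrases the checks as '[' N '!=' N ']' guards that fall through on success):

    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1              # prints "true"
    $rpc nvmf_delete_target -n nvmf_tgt_2              # prints "true"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default

After the checks pass, nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the nvmf_tgt pid, and tears the namespace topology back down, producing the timing summary below.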
real 0m2.799s 00:11:49.192 user 0m7.882s 00:11:49.192 sys 0m0.754s 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.192 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.192 ************************************ 00:11:49.192 END TEST nvmf_multitarget 00:11:49.192 ************************************ 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.450 ************************************ 00:11:49.450 START TEST nvmf_rpc 00:11:49.450 ************************************ 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:49.450 * Looking for test storage... 00:11:49.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:49.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.450 --rc genhtml_branch_coverage=1 00:11:49.450 --rc genhtml_function_coverage=1 00:11:49.450 --rc genhtml_legend=1 00:11:49.450 --rc geninfo_all_blocks=1 00:11:49.450 --rc geninfo_unexecuted_blocks=1 00:11:49.450 00:11:49.450 ' 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:49.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.450 --rc genhtml_branch_coverage=1 00:11:49.450 --rc genhtml_function_coverage=1 00:11:49.450 --rc genhtml_legend=1 00:11:49.450 --rc geninfo_all_blocks=1 00:11:49.450 --rc geninfo_unexecuted_blocks=1 00:11:49.450 00:11:49.450 ' 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:49.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.450 --rc genhtml_branch_coverage=1 00:11:49.450 --rc genhtml_function_coverage=1 00:11:49.450 --rc genhtml_legend=1 00:11:49.450 --rc geninfo_all_blocks=1 00:11:49.450 --rc geninfo_unexecuted_blocks=1 00:11:49.450 00:11:49.450 ' 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:49.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.450 --rc genhtml_branch_coverage=1 00:11:49.450 --rc genhtml_function_coverage=1 00:11:49.450 --rc genhtml_legend=1 00:11:49.450 --rc geninfo_all_blocks=1 00:11:49.450 --rc geninfo_unexecuted_blocks=1 00:11:49.450 00:11:49.450 ' 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.450 15:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.450 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:49.451 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:49.451 Cannot find device "nvmf_init_br" 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:11:49.451 15:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:49.451 Cannot find device "nvmf_init_br2" 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:11:49.451 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:49.709 Cannot find device "nvmf_tgt_br" 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.709 Cannot find device "nvmf_tgt_br2" 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:49.709 Cannot find device "nvmf_init_br" 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:49.709 Cannot find device "nvmf_init_br2" 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:49.709 Cannot find device "nvmf_tgt_br" 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:49.709 Cannot find device "nvmf_tgt_br2" 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:49.709 Cannot find device "nvmf_br" 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:49.709 Cannot find device "nvmf_init_if" 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:49.709 Cannot find device "nvmf_init_if2" 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:49.709 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:49.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:49.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:11:49.970 00:11:49.970 --- 10.0.0.3 ping statistics --- 00:11:49.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.970 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:49.970 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:49.970 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:11:49.970 00:11:49.970 --- 10.0.0.4 ping statistics --- 00:11:49.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.970 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:49.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:49.970 00:11:49.970 --- 10.0.0.1 ping statistics --- 00:11:49.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.970 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:49.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:49.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:11:49.970 00:11:49.970 --- 10.0.0.2 ping statistics --- 00:11:49.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.970 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # return 0 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:49.970 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=73801 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 73801 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 73801 ']' 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.970 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.970 [2024-10-01 15:24:49.092928] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
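The "Cannot find device" lines down through the four pings above are nvmf_veth_init tearing down leftovers from a previous run and rebuilding the isolated test topology before the target starts. A minimal sketch of those steps, reconstructed from the trace (only the primary interface pair per side is shown; the *_if2/*_br2 pair follows the same pattern):

# Sketch of the veth/netns topology built by nvmf_veth_init, reconstructed
# from the trace above; cleanup, error handling, and the second interface
# pair are omitted.
ip netns add nvmf_tgt_ns_spdk

# One veth pair per side: the *_if ends carry traffic, the *_br ends
# get enslaved to the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br

# The target-side interface lives inside the namespace.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator gets 10.0.0.1, target gets 10.0.0.3 (addresses from the log).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A bridge ties the two *_br ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open TCP/4420 for NVMe/TCP, let bridged traffic through, then verify
# reachability in both directions the same way the trace does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1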
00:11:49.970 [2024-10-01 15:24:49.093049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.227 [2024-10-01 15:24:49.235626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.227 [2024-10-01 15:24:49.323046] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.227 [2024-10-01 15:24:49.323111] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.227 [2024-10-01 15:24:49.323125] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.227 [2024-10-01 15:24:49.323135] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.227 [2024-10-01 15:24:49.323143] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.227 [2024-10-01 15:24:49.323493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.227 [2024-10-01 15:24:49.323561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.227 [2024-10-01 15:24:49.323691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.227 [2024-10-01 15:24:49.323700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.485 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:50.485 "poll_groups": [ 00:11:50.485 { 00:11:50.485 "admin_qpairs": 0, 00:11:50.485 "completed_nvme_io": 0, 00:11:50.485 "current_admin_qpairs": 0, 00:11:50.485 "current_io_qpairs": 0, 00:11:50.485 "io_qpairs": 0, 00:11:50.485 "name": "nvmf_tgt_poll_group_000", 00:11:50.485 "pending_bdev_io": 0, 00:11:50.485 "transports": [] 00:11:50.485 }, 00:11:50.485 { 00:11:50.485 "admin_qpairs": 0, 00:11:50.485 "completed_nvme_io": 0, 00:11:50.485 "current_admin_qpairs": 0, 00:11:50.485 "current_io_qpairs": 0, 00:11:50.486 "io_qpairs": 0, 00:11:50.486 "name": "nvmf_tgt_poll_group_001", 00:11:50.486 "pending_bdev_io": 0, 00:11:50.486 "transports": [] 00:11:50.486 }, 00:11:50.486 { 00:11:50.486 "admin_qpairs": 0, 00:11:50.486 "completed_nvme_io": 0, 00:11:50.486 "current_admin_qpairs": 0, 00:11:50.486 "current_io_qpairs": 0, 
00:11:50.486 "io_qpairs": 0, 00:11:50.486 "name": "nvmf_tgt_poll_group_002", 00:11:50.486 "pending_bdev_io": 0, 00:11:50.486 "transports": [] 00:11:50.486 }, 00:11:50.486 { 00:11:50.486 "admin_qpairs": 0, 00:11:50.486 "completed_nvme_io": 0, 00:11:50.486 "current_admin_qpairs": 0, 00:11:50.486 "current_io_qpairs": 0, 00:11:50.486 "io_qpairs": 0, 00:11:50.486 "name": "nvmf_tgt_poll_group_003", 00:11:50.486 "pending_bdev_io": 0, 00:11:50.486 "transports": [] 00:11:50.486 } 00:11:50.486 ], 00:11:50.486 "tick_rate": 2200000000 00:11:50.486 }' 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.486 [2024-10-01 15:24:49.625933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.486 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:50.745 "poll_groups": [ 00:11:50.745 { 00:11:50.745 "admin_qpairs": 0, 00:11:50.745 "completed_nvme_io": 0, 00:11:50.745 "current_admin_qpairs": 0, 00:11:50.745 "current_io_qpairs": 0, 00:11:50.745 "io_qpairs": 0, 00:11:50.745 "name": "nvmf_tgt_poll_group_000", 00:11:50.745 "pending_bdev_io": 0, 00:11:50.745 "transports": [ 00:11:50.745 { 00:11:50.745 "trtype": "TCP" 00:11:50.745 } 00:11:50.745 ] 00:11:50.745 }, 00:11:50.745 { 00:11:50.745 "admin_qpairs": 0, 00:11:50.745 "completed_nvme_io": 0, 00:11:50.745 "current_admin_qpairs": 0, 00:11:50.745 "current_io_qpairs": 0, 00:11:50.745 "io_qpairs": 0, 00:11:50.745 "name": "nvmf_tgt_poll_group_001", 00:11:50.745 "pending_bdev_io": 0, 00:11:50.745 "transports": [ 00:11:50.745 { 00:11:50.745 "trtype": "TCP" 00:11:50.745 } 00:11:50.745 ] 00:11:50.745 }, 00:11:50.745 { 00:11:50.745 "admin_qpairs": 0, 00:11:50.745 "completed_nvme_io": 0, 00:11:50.745 "current_admin_qpairs": 0, 00:11:50.745 "current_io_qpairs": 0, 00:11:50.745 "io_qpairs": 0, 00:11:50.745 "name": "nvmf_tgt_poll_group_002", 00:11:50.745 "pending_bdev_io": 0, 00:11:50.745 "transports": [ 00:11:50.745 { 00:11:50.745 "trtype": "TCP" 00:11:50.745 } 
00:11:50.745 ] 00:11:50.745 }, 00:11:50.745 { 00:11:50.745 "admin_qpairs": 0, 00:11:50.745 "completed_nvme_io": 0, 00:11:50.745 "current_admin_qpairs": 0, 00:11:50.745 "current_io_qpairs": 0, 00:11:50.745 "io_qpairs": 0, 00:11:50.745 "name": "nvmf_tgt_poll_group_003", 00:11:50.745 "pending_bdev_io": 0, 00:11:50.745 "transports": [ 00:11:50.745 { 00:11:50.745 "trtype": "TCP" 00:11:50.745 } 00:11:50.745 ] 00:11:50.745 } 00:11:50.745 ], 00:11:50.745 "tick_rate": 2200000000 00:11:50.745 }' 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 Malloc1 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:50.745 15:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 [2024-10-01 15:24:49.820880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -a 10.0.0.3 -s 4420 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -a 10.0.0.3 -s 4420 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -a 10.0.0.3 -s 4420 00:11:50.745 [2024-10-01 15:24:49.853292] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf' 00:11:50.745 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:50.745 could not add new controller: failed to write to nvme-fabrics device 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 
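The Input/output error above is the expected outcome, not a test failure: the subsystem was created with allow_any_host disabled (rpc.sh@54), so the first connect is rejected with "does not allow host" until rpc.sh@61 whitelists the host NQN. A minimal sketch of that allow/deny flow, with rpc.py standing in for the test's rpc_cmd wrapper and the per-run host UUID taken from this log:

# Host access control as exercised above (sketch).
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf

# With allow_any_host disabled and no hosts whitelisted, the fabric-level
# connect is rejected ("Subsystem ... does not allow host ...").
rpc.py nvmf_subsystem_allow_any_host -d "$SUBNQN"
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN" \
    || echo "rejected, as expected"

# Whitelist the host NQN; the same connect now succeeds.
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"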
00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.745 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:51.004 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.004 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:51.004 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.004 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:51.004 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:52.906 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:52.906 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:52.906 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.906 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:52.906 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.906 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:52.906 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:53.164 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:53.165 [2024-10-01 15:24:52.164416] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf' 00:11:53.165 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:53.165 could not add new controller: failed to write to nvme-fabrics device 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.165 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:53.457 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.457 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:53.457 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.457 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:53.457 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.357 [2024-10-01 15:24:54.443990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.357 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:55.615 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.615 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:55.615 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.615 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:55.615 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:57.516 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:57.516 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:57.516 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.516 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:57.516 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.516 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:57.516 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.774 [2024-10-01 15:24:56.755676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.774 15:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:57.774 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:00.305 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:00.305 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:00.305 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.305 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:00.305 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.305 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:00.305 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 15:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 [2024-10-01 15:24:59.063082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:00.305 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1205 -- # sleep 2 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.206 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:02.464 15:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.464 [2024-10-01 15:25:01.378950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:02.464 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.992 15:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.992 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.993 [2024-10-01 15:25:03.678344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
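At this point the trace is inside the seq 1 5 loop (rpc.sh@81), which rebuilds the subsystem from scratch on every pass. The target-side RPC sequence per iteration, as a sketch (rpc.py again standing in for rpc_cmd; Malloc1 is the 64 MiB / 512 B malloc bdev created at rpc.sh@49):

# Target-side setup/teardown repeated on each loop iteration (sketch).
SUBNQN=nqn.2016-06.io.spdk:cnode1

rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME   # serial number
rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5              # expose bdev as nsid 5
rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"                   # open to all hosts

# ...initiator connects, verifies the device, disconnects (sketch below)...

# Teardown before the next iteration.
rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
rpc.py nvmf_delete_subsystem "$SUBNQN"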
00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:04.993 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
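
Each pass traced above drives one complete subsystem lifecycle over JSON-RPC. Condensed into plain rpc.py calls, one iteration of the target/rpc.sh loop looks roughly like this — method names, arguments, addresses, and the serial are exactly those in the trace, while $loops and NVME_HOST come from the surrounding test harness and are assumed here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 $loops); do
        $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420
        $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5
        $rpc nvmf_subsystem_allow_any_host $nqn
        # Connect from the initiator side, wait for the namespace to appear,
        # then tear everything back down in reverse order.
        nvme connect "${NVME_HOST[@]}" -t tcp -n $nqn -a 10.0.0.3 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n $nqn
        waitforserial_disconnect SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_remove_ns $nqn 5
        $rpc nvmf_delete_subsystem $nqn
    done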
00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 [2024-10-01 15:25:05.977701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.892 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.892 [2024-10-01 15:25:06.025807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:06.892 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.893 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:07.151 15:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 [2024-10-01 15:25:06.077789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 [2024-10-01 15:25:06.125841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 
15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.151 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.152 [2024-10-01 15:25:06.173898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:07.152 "poll_groups": [ 00:12:07.152 { 00:12:07.152 "admin_qpairs": 2, 00:12:07.152 "completed_nvme_io": 115, 00:12:07.152 "current_admin_qpairs": 0, 00:12:07.152 "current_io_qpairs": 0, 00:12:07.152 "io_qpairs": 16, 00:12:07.152 "name": "nvmf_tgt_poll_group_000", 00:12:07.152 "pending_bdev_io": 0, 00:12:07.152 "transports": [ 00:12:07.152 { 00:12:07.152 "trtype": "TCP" 00:12:07.152 } 00:12:07.152 ] 00:12:07.152 }, 00:12:07.152 { 00:12:07.152 "admin_qpairs": 3, 00:12:07.152 "completed_nvme_io": 68, 00:12:07.152 "current_admin_qpairs": 0, 00:12:07.152 "current_io_qpairs": 0, 00:12:07.152 "io_qpairs": 17, 00:12:07.152 "name": "nvmf_tgt_poll_group_001", 00:12:07.152 "pending_bdev_io": 0, 00:12:07.152 "transports": [ 00:12:07.152 { 00:12:07.152 "trtype": "TCP" 00:12:07.152 } 00:12:07.152 ] 00:12:07.152 }, 00:12:07.152 { 00:12:07.152 "admin_qpairs": 1, 00:12:07.152 "completed_nvme_io": 118, 00:12:07.152 "current_admin_qpairs": 0, 00:12:07.152 "current_io_qpairs": 0, 00:12:07.152 "io_qpairs": 19, 00:12:07.152 "name": "nvmf_tgt_poll_group_002", 00:12:07.152 "pending_bdev_io": 0, 00:12:07.152 "transports": [ 00:12:07.152 { 00:12:07.152 "trtype": "TCP" 00:12:07.152 } 00:12:07.152 ] 00:12:07.152 }, 00:12:07.152 { 00:12:07.152 "admin_qpairs": 1, 00:12:07.152 "completed_nvme_io": 119, 00:12:07.152 "current_admin_qpairs": 0, 00:12:07.152 "current_io_qpairs": 0, 00:12:07.152 "io_qpairs": 18, 00:12:07.152 "name": "nvmf_tgt_poll_group_003", 00:12:07.152 "pending_bdev_io": 0, 00:12:07.152 "transports": [ 00:12:07.152 { 00:12:07.152 "trtype": "TCP" 00:12:07.152 } 00:12:07.152 ] 00:12:07.152 } 00:12:07.152 ], 
00:12:07.152 "tick_rate": 2200000000 00:12:07.152 }' 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:07.152 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.411 rmmod nvme_tcp 00:12:07.411 rmmod nvme_fabrics 00:12:07.411 rmmod nvme_keyring 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 73801 ']' 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 73801 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 73801 ']' 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 73801 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73801 00:12:07.411 killing process with pid 73801 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:07.411 15:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73801' 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 73801 00:12:07.411 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 73801 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.671 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:12:07.930 00:12:07.930 real 0m18.486s 00:12:07.930 user 1m7.606s 00:12:07.930 sys 0m2.803s 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.930 ************************************ 00:12:07.930 END TEST nvmf_rpc 00:12:07.930 ************************************ 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.930 ************************************ 00:12:07.930 START TEST nvmf_invalid 00:12:07.930 ************************************ 00:12:07.930 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:07.930 * Looking for test storage... 00:12:07.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.930 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:07.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.931 --rc genhtml_branch_coverage=1 00:12:07.931 --rc genhtml_function_coverage=1 00:12:07.931 --rc genhtml_legend=1 00:12:07.931 --rc geninfo_all_blocks=1 00:12:07.931 --rc geninfo_unexecuted_blocks=1 00:12:07.931 00:12:07.931 ' 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:07.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.931 --rc genhtml_branch_coverage=1 00:12:07.931 --rc genhtml_function_coverage=1 00:12:07.931 --rc genhtml_legend=1 00:12:07.931 --rc geninfo_all_blocks=1 00:12:07.931 --rc geninfo_unexecuted_blocks=1 00:12:07.931 00:12:07.931 ' 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:07.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.931 --rc genhtml_branch_coverage=1 00:12:07.931 --rc genhtml_function_coverage=1 00:12:07.931 --rc genhtml_legend=1 00:12:07.931 --rc geninfo_all_blocks=1 00:12:07.931 --rc geninfo_unexecuted_blocks=1 00:12:07.931 00:12:07.931 ' 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:07.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.931 --rc genhtml_branch_coverage=1 00:12:07.931 --rc genhtml_function_coverage=1 00:12:07.931 --rc genhtml_legend=1 00:12:07.931 --rc geninfo_all_blocks=1 00:12:07.931 --rc geninfo_unexecuted_blocks=1 00:12:07.931 00:12:07.931 ' 00:12:07.931 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:08.190 15:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.190 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.191 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
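
The nvmf_veth_init sequence that follows first attempts to tear down any leftover interfaces (hence the harmless "Cannot find device" messages on a clean host) and then builds a bridged veth topology between the host-side initiator interfaces and the nvmf_tgt_ns_spdk namespace. A condensed sketch of the core setup, with commands and addresses as they appear in the trace below; the second initiator/target pair (nvmf_init_if2 / nvmf_tgt_if2) and its mirrored iptables rule are omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host-side initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                             # bridge ties both veth peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                          # host -> namespace over the bridge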
00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:08.191 Cannot find device "nvmf_init_br" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:08.191 Cannot find device "nvmf_init_br2" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:08.191 Cannot find device "nvmf_tgt_br" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.191 Cannot find device "nvmf_tgt_br2" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:08.191 Cannot find device "nvmf_init_br" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:08.191 Cannot find device "nvmf_init_br2" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:08.191 Cannot find device "nvmf_tgt_br" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:08.191 Cannot find device "nvmf_tgt_br2" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:08.191 Cannot find device "nvmf_br" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:08.191 Cannot find device "nvmf_init_if" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:08.191 Cannot find device "nvmf_init_if2" 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.191 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:08.191 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:08.450 15:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:08.450 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:08.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:08.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:12:08.450 00:12:08.450 --- 10.0.0.3 ping statistics --- 00:12:08.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.451 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:08.451 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:08.451 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:12:08.451 00:12:08.451 --- 10.0.0.4 ping statistics --- 00:12:08.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.451 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:08.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:08.451 00:12:08.451 --- 10.0.0.1 ping statistics --- 00:12:08.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.451 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:08.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:08.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:12:08.451 00:12:08.451 --- 10.0.0.2 ping statistics --- 00:12:08.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.451 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # return 0 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=74346 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 74346 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 74346 ']' 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:08.451 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:08.451 [2024-10-01 15:25:07.557830] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:12:08.451 [2024-10-01 15:25:07.557930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.709 [2024-10-01 15:25:07.705310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.709 [2024-10-01 15:25:07.781139] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.709 [2024-10-01 15:25:07.781194] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.709 [2024-10-01 15:25:07.781206] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.709 [2024-10-01 15:25:07.781215] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.709 [2024-10-01 15:25:07.781223] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.709 [2024-10-01 15:25:07.781554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.709 [2024-10-01 15:25:07.781609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.709 [2024-10-01 15:25:07.782061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.709 [2024-10-01 15:25:07.782095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.644 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:09.644 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:12:09.645 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:09.645 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:09.645 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.645 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.645 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:09.645 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23494 00:12:09.902 [2024-10-01 15:25:08.982249] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:09.903 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/10/01 15:25:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23494 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:09.903 request: 00:12:09.903 { 00:12:09.903 "method": "nvmf_create_subsystem", 00:12:09.903 "params": { 00:12:09.903 "nqn": "nqn.2016-06.io.spdk:cnode23494", 00:12:09.903 "tgt_name": "foobar" 00:12:09.903 } 00:12:09.903 } 00:12:09.903 Got JSON-RPC error response 00:12:09.903 GoRPCClient: error on JSON-RPC call' 00:12:09.903 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/10/01 15:25:08 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode23494 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:09.903 request: 00:12:09.903 { 00:12:09.903 "method": "nvmf_create_subsystem", 00:12:09.903 "params": { 00:12:09.903 "nqn": "nqn.2016-06.io.spdk:cnode23494", 00:12:09.903 "tgt_name": "foobar" 00:12:09.903 } 00:12:09.903 } 00:12:09.903 Got JSON-RPC error response 00:12:09.903 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:09.903 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:09.903 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3139 00:12:10.161 [2024-10-01 15:25:09.274611] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3139: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:10.161 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/10/01 15:25:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3139 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:10.161 request: 00:12:10.161 { 00:12:10.161 "method": "nvmf_create_subsystem", 00:12:10.161 "params": { 00:12:10.161 "nqn": "nqn.2016-06.io.spdk:cnode3139", 00:12:10.161 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:10.161 } 00:12:10.161 } 00:12:10.161 Got JSON-RPC error response 00:12:10.161 GoRPCClient: error on JSON-RPC call' 00:12:10.161 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/10/01 15:25:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3139 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:10.161 request: 00:12:10.161 { 00:12:10.161 "method": "nvmf_create_subsystem", 00:12:10.161 "params": { 00:12:10.161 "nqn": "nqn.2016-06.io.spdk:cnode3139", 00:12:10.161 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:10.161 } 00:12:10.161 } 00:12:10.161 Got JSON-RPC error response 00:12:10.161 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:10.161 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:10.161 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14215 00:12:10.421 [2024-10-01 15:25:09.582806] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14215: invalid model number 'SPDK_Controller' 00:12:10.681 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/10/01 15:25:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode14215], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:10.681 request: 00:12:10.681 { 00:12:10.681 "method": "nvmf_create_subsystem", 00:12:10.681 "params": { 00:12:10.681 "nqn": "nqn.2016-06.io.spdk:cnode14215", 00:12:10.681 "model_number": "SPDK_Controller\u001f" 00:12:10.681 } 
00:12:10.681 } 00:12:10.681 Got JSON-RPC error response 00:12:10.681 GoRPCClient: error on JSON-RPC call' 00:12:10.681 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/10/01 15:25:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode14215], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:10.681 request: 00:12:10.681 { 00:12:10.681 "method": "nvmf_create_subsystem", 00:12:10.681 "params": { 00:12:10.681 "nqn": "nqn.2016-06.io.spdk:cnode14215", 00:12:10.681 "model_number": "SPDK_Controller\u001f" 00:12:10.681 } 00:12:10.681 } 00:12:10.681 Got JSON-RPC error response 00:12:10.681 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:10.682 
15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 
00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.682 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\_@]1!,U_*$ 8VX]K7]ga' 00:12:10.683 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '\_@]1!,U_*$ 8VX]K7]ga' nqn.2016-06.io.spdk:cnode10581 00:12:10.943 [2024-10-01 15:25:09.963167] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10581: invalid serial number '\_@]1!,U_*$ 8VX]K7]ga' 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/10/01 15:25:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10581 serial_number:\_@]1!,U_*$ 8VX]K7]ga], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN \_@]1!,U_*$ 8VX]K7]ga 00:12:10.943 request: 00:12:10.943 { 00:12:10.943 "method": "nvmf_create_subsystem", 00:12:10.943 "params": { 00:12:10.943 "nqn": "nqn.2016-06.io.spdk:cnode10581", 
00:12:10.943 "serial_number": "\\_@]1!,U_*$ 8VX]K7]ga" 00:12:10.943 } 00:12:10.943 } 00:12:10.943 Got JSON-RPC error response 00:12:10.943 GoRPCClient: error on JSON-RPC call' 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/10/01 15:25:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10581 serial_number:\_@]1!,U_*$ 8VX]K7]ga], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN \_@]1!,U_*$ 8VX]K7]ga 00:12:10.943 request: 00:12:10.943 { 00:12:10.943 "method": "nvmf_create_subsystem", 00:12:10.943 "params": { 00:12:10.943 "nqn": "nqn.2016-06.io.spdk:cnode10581", 00:12:10.943 "serial_number": "\\_@]1!,U_*$ 8VX]K7]ga" 00:12:10.943 } 00:12:10.943 } 00:12:10.943 Got JSON-RPC error response 00:12:10.943 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.943 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:10.943 15:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:10.943 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:10.944 
15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.944 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x71' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
108 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:11.203 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'IWe&juG'\'';\S*r!=LcCX!x.(jxLqfKrw=Kl}53S,/V' 00:12:11.204 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'IWe&juG'\'';\S*r!=LcCX!x.(jxLqfKrw=Kl}53S,/V' nqn.2016-06.io.spdk:cnode21080 00:12:11.463 [2024-10-01 15:25:10.455632] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21080: invalid model number 'IWe&juG';\S*r!=LcCX!x.(jxLqfKrw=Kl}53S,/V' 00:12:11.463 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/10/01 15:25:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:IWe&juG'\'';\S*r!=LcCX!x.(jxLqfKrw=Kl}53S,/V nqn:nqn.2016-06.io.spdk:cnode21080], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN IWe&juG'\'';\S*r!=LcCX!x.(jxLqfKrw=Kl}53S,/V 00:12:11.463 request: 00:12:11.463 { 00:12:11.463 "method": "nvmf_create_subsystem", 00:12:11.463 "params": { 00:12:11.463 "nqn": "nqn.2016-06.io.spdk:cnode21080", 00:12:11.463 "model_number": "IWe&juG'\'';\\S*r!=LcCX!x.(jxLqfKrw=Kl}53S,/V" 00:12:11.463 } 00:12:11.463 } 00:12:11.463 Got JSON-RPC error response 00:12:11.463 GoRPCClient: error on JSON-RPC call' 00:12:11.463 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/10/01 15:25:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:IWe&juG';\S*r!=LcCX!x.(jxLqfKrw=Kl}53S,/V nqn:nqn.2016-06.io.spdk:cnode21080], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN IWe&juG';\S*r!=LcCX!x.(jxLqfKrw=Kl}53S,/V 00:12:11.463 request: 00:12:11.463 { 00:12:11.463 "method": "nvmf_create_subsystem", 00:12:11.463 "params": { 00:12:11.463 "nqn": "nqn.2016-06.io.spdk:cnode21080", 00:12:11.463 "model_number": "IWe&juG';\\S*r!=LcCX!x.(jxLqfKrw=Kl}53S,/V" 00:12:11.463 } 00:12:11.463 } 00:12:11.463 Got JSON-RPC error response 00:12:11.463 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:11.463 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:11.722 [2024-10-01 15:25:10.756288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.722 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:12.289 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:12.289 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:12.289 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 
1 00:12:12.289 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:12.289 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:12.289 [2024-10-01 15:25:11.444879] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:12.548 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/10/01 15:25:11 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:12.548 request: 00:12:12.548 { 00:12:12.548 "method": "nvmf_subsystem_remove_listener", 00:12:12.548 "params": { 00:12:12.548 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:12.548 "listen_address": { 00:12:12.548 "trtype": "tcp", 00:12:12.548 "traddr": "", 00:12:12.548 "trsvcid": "4421" 00:12:12.548 } 00:12:12.548 } 00:12:12.548 } 00:12:12.548 Got JSON-RPC error response 00:12:12.548 GoRPCClient: error on JSON-RPC call' 00:12:12.548 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/10/01 15:25:11 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:12.548 request: 00:12:12.548 { 00:12:12.548 "method": "nvmf_subsystem_remove_listener", 00:12:12.548 "params": { 00:12:12.548 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:12.548 "listen_address": { 00:12:12.548 "trtype": "tcp", 00:12:12.548 "traddr": "", 00:12:12.548 "trsvcid": "4421" 00:12:12.548 } 00:12:12.548 } 00:12:12.548 } 00:12:12.548 Got JSON-RPC error response 00:12:12.548 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:12.548 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30595 -i 0 00:12:12.806 [2024-10-01 15:25:11.809109] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30595: invalid cntlid range [0-65519] 00:12:12.806 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/10/01 15:25:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30595], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:12.806 request: 00:12:12.806 { 00:12:12.806 "method": "nvmf_create_subsystem", 00:12:12.806 "params": { 00:12:12.806 "nqn": "nqn.2016-06.io.spdk:cnode30595", 00:12:12.806 "min_cntlid": 0 00:12:12.806 } 00:12:12.806 } 00:12:12.806 Got JSON-RPC error response 00:12:12.806 GoRPCClient: error on JSON-RPC call' 00:12:12.806 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/10/01 15:25:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30595], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:12.806 request: 00:12:12.806 { 00:12:12.806 "method": "nvmf_create_subsystem", 00:12:12.806 "params": 
{ 00:12:12.806 "nqn": "nqn.2016-06.io.spdk:cnode30595", 00:12:12.806 "min_cntlid": 0 00:12:12.806 } 00:12:12.806 } 00:12:12.806 Got JSON-RPC error response 00:12:12.806 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.806 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7736 -i 65520 00:12:13.064 [2024-10-01 15:25:12.085344] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7736: invalid cntlid range [65520-65519] 00:12:13.064 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/10/01 15:25:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode7736], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:13.064 request: 00:12:13.064 { 00:12:13.064 "method": "nvmf_create_subsystem", 00:12:13.064 "params": { 00:12:13.064 "nqn": "nqn.2016-06.io.spdk:cnode7736", 00:12:13.064 "min_cntlid": 65520 00:12:13.064 } 00:12:13.064 } 00:12:13.064 Got JSON-RPC error response 00:12:13.064 GoRPCClient: error on JSON-RPC call' 00:12:13.064 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/10/01 15:25:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode7736], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:13.064 request: 00:12:13.064 { 00:12:13.064 "method": "nvmf_create_subsystem", 00:12:13.064 "params": { 00:12:13.064 "nqn": "nqn.2016-06.io.spdk:cnode7736", 00:12:13.064 "min_cntlid": 65520 00:12:13.064 } 00:12:13.064 } 00:12:13.064 Got JSON-RPC error response 00:12:13.064 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:13.064 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14850 -I 0 00:12:13.322 [2024-10-01 15:25:12.441721] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14850: invalid cntlid range [1-0] 00:12:13.322 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/10/01 15:25:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14850], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:13.322 request: 00:12:13.322 { 00:12:13.322 "method": "nvmf_create_subsystem", 00:12:13.322 "params": { 00:12:13.322 "nqn": "nqn.2016-06.io.spdk:cnode14850", 00:12:13.322 "max_cntlid": 0 00:12:13.322 } 00:12:13.322 } 00:12:13.322 Got JSON-RPC error response 00:12:13.322 GoRPCClient: error on JSON-RPC call' 00:12:13.322 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/10/01 15:25:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14850], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:13.322 request: 00:12:13.322 { 00:12:13.322 "method": "nvmf_create_subsystem", 00:12:13.322 "params": { 00:12:13.322 "nqn": "nqn.2016-06.io.spdk:cnode14850", 00:12:13.322 "max_cntlid": 0 00:12:13.322 } 00:12:13.322 } 
00:12:13.322 Got JSON-RPC error response 00:12:13.322 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:13.322 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21250 -I 65520 00:12:13.886 [2024-10-01 15:25:12.750216] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21250: invalid cntlid range [1-65520] 00:12:13.886 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/10/01 15:25:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21250], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:13.886 request: 00:12:13.886 { 00:12:13.886 "method": "nvmf_create_subsystem", 00:12:13.886 "params": { 00:12:13.886 "nqn": "nqn.2016-06.io.spdk:cnode21250", 00:12:13.886 "max_cntlid": 65520 00:12:13.886 } 00:12:13.886 } 00:12:13.886 Got JSON-RPC error response 00:12:13.886 GoRPCClient: error on JSON-RPC call' 00:12:13.886 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/10/01 15:25:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21250], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:13.886 request: 00:12:13.886 { 00:12:13.886 "method": "nvmf_create_subsystem", 00:12:13.886 "params": { 00:12:13.886 "nqn": "nqn.2016-06.io.spdk:cnode21250", 00:12:13.886 "max_cntlid": 65520 00:12:13.886 } 00:12:13.886 } 00:12:13.886 Got JSON-RPC error response 00:12:13.886 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:13.886 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15383 -i 6 -I 5 00:12:13.886 [2024-10-01 15:25:13.014469] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15383: invalid cntlid range [6-5] 00:12:13.886 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/10/01 15:25:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode15383], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:13.886 request: 00:12:13.886 { 00:12:13.886 "method": "nvmf_create_subsystem", 00:12:13.886 "params": { 00:12:13.886 "nqn": "nqn.2016-06.io.spdk:cnode15383", 00:12:13.886 "min_cntlid": 6, 00:12:13.886 "max_cntlid": 5 00:12:13.886 } 00:12:13.886 } 00:12:13.886 Got JSON-RPC error response 00:12:13.886 GoRPCClient: error on JSON-RPC call' 00:12:13.886 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/10/01 15:25:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode15383], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:13.886 request: 00:12:13.886 { 00:12:13.886 "method": "nvmf_create_subsystem", 00:12:13.886 "params": { 00:12:13.886 "nqn": "nqn.2016-06.io.spdk:cnode15383", 00:12:13.886 "min_cntlid": 6, 00:12:13.886 "max_cntlid": 5 00:12:13.886 } 00:12:13.886 } 00:12:13.886 Got JSON-RPC error 
00:12:13.886 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:12:14.146 {
00:12:14.146 "name": "foobar",
00:12:14.146 "method": "nvmf_delete_target",
00:12:14.146 "req_id": 1
00:12:14.146 }
00:12:14.146 Got JSON-RPC error response
00:12:14.146 response:
00:12:14.146 {
00:12:14.146 "code": -32602,
00:12:14.146 "message": "The specified target doesn'\''t exist, cannot delete it."
00:12:14.146 }'
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:12:14.146 {
00:12:14.146 "name": "foobar",
00:12:14.146 "method": "nvmf_delete_target",
00:12:14.146 "req_id": 1
00:12:14.146 }
00:12:14.146 Got JSON-RPC error response
00:12:14.146 response:
00:12:14.146 {
00:12:14.146 "code": -32602,
00:12:14.146 "message": "The specified target doesn't exist, cannot delete it."
00:12:14.146 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:14.146 rmmod nvme_tcp
00:12:14.146 rmmod nvme_fabrics
00:12:14.146 rmmod nvme_keyring
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 74346 ']'
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 74346
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 74346 ']'
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 74346
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74346
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:14.146 killing process with pid 74346
15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74346'
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 74346
00:12:14.146 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 74346
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-restore
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:12:14.404 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:12:14.662 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
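
One detail worth noting in this teardown: iptr restores firewall state with a single filter pass, iptables-save | grep -v SPDK_NVMF | iptables-restore, rather than deleting rules one by one. That works because every rule the harness inserts carries an SPDK_NVMF comment tag (the tagging side is visible further below, when nvmf_veth_init runs again for the next test). Roughly, with values taken from this log:

    # insertion side: each ACCEPT rule embeds its own spec in a comment tag
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown side: drop every tagged rule in one pass, leaving unrelated rules intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
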
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0
00:12:14.663
00:12:14.663 real 0m6.791s
00:12:14.663 user 0m26.768s
00:12:14.663 sys 0m1.348s
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:14.663 ************************************
00:12:14.663 END TEST nvmf_invalid
00:12:14.663 ************************************
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:14.663 ************************************
00:12:14.663 START TEST nvmf_connect_stress
00:12:14.663 ************************************
00:12:14.663 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:14.921 * Looking for test storage...
00:12:14.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:14.921
15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:14.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.921 --rc genhtml_branch_coverage=1 00:12:14.921 --rc genhtml_function_coverage=1 00:12:14.921 --rc genhtml_legend=1 00:12:14.921 --rc geninfo_all_blocks=1 00:12:14.921 --rc geninfo_unexecuted_blocks=1 00:12:14.921 00:12:14.921 ' 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:14.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.921 --rc genhtml_branch_coverage=1 00:12:14.921 --rc genhtml_function_coverage=1 00:12:14.921 --rc genhtml_legend=1 00:12:14.921 --rc geninfo_all_blocks=1 00:12:14.921 --rc geninfo_unexecuted_blocks=1 00:12:14.921 00:12:14.921 ' 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:14.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.921 --rc genhtml_branch_coverage=1 00:12:14.921 --rc genhtml_function_coverage=1 00:12:14.921 --rc genhtml_legend=1 00:12:14.921 --rc geninfo_all_blocks=1 00:12:14.921 --rc geninfo_unexecuted_blocks=1 00:12:14.921 00:12:14.921 ' 00:12:14.921 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:14.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.921 --rc genhtml_branch_coverage=1 00:12:14.921 --rc genhtml_function_coverage=1 00:12:14.921 --rc genhtml_legend=1 00:12:14.922 --rc geninfo_all_blocks=1 00:12:14.922 --rc geninfo_unexecuted_blocks=1 00:12:14.922 00:12:14.922 ' 00:12:14.922 15:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:14.922 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.922 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:14.922 15:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:14.922 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:14.923 Cannot find device "nvmf_init_br" 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:14.923 Cannot find device "nvmf_init_br2" 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:14.923 Cannot find device "nvmf_tgt_br" 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.923 Cannot find device "nvmf_tgt_br2" 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:14.923 Cannot find device "nvmf_init_br" 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:14.923 Cannot find device "nvmf_init_br2" 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:14.923 Cannot find device "nvmf_tgt_br" 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:14.923 Cannot find device "nvmf_tgt_br2" 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:12:14.923 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:15.181 Cannot find device "nvmf_br" 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:15.181 Cannot find device "nvmf_init_if" 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:12:15.181 
15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:15.181 Cannot find device "nvmf_init_if2" 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:15.181 15:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:12:15.181 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:12:15.439 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:12:15.439 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:12:15.439 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:12:15.439 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:12:15.439 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:12:15.439 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
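
At this point the NET_TYPE=virt test network is fully assembled: four veth pairs, with the initiator ends (nvmf_init_if, nvmf_init_if2) left in the root namespace, the target ends (nvmf_tgt_if, nvmf_tgt_if2) moved into nvmf_tgt_ns_spdk, and all four bridge-side peers enslaved to the single bridge nvmf_br. A condensed sketch of one pair, using the names and addresses from this run (the second pair, with 10.0.0.2 and 10.0.0.4, is wired identically):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end plus its bridge-side peer
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end plus its bridge-side peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # only the target end enters the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # the bridge stitches root and target namespaces together
    ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up

The four pings that follow are the smoke test for exactly this wiring: 10.0.0.3 and 10.0.0.4 must be reachable from the root namespace through the bridge, and 10.0.0.1 and 10.0.0.2 from inside the namespace.
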
00:12:15.439 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:12:15.439 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:12:15.439 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms
00:12:15.439
00:12:15.439 --- 10.0.0.3 ping statistics ---
00:12:15.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:15.439 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:12:15.439 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:12:15.440 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:12:15.440 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:12:15.440
00:12:15.440 --- 10.0.0.4 ping statistics ---
00:12:15.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:15.440 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:12:15.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:15.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:12:15.440
00:12:15.440 --- 10.0.0.1 ping statistics ---
00:12:15.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:15.440 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:12:15.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:15.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:12:15.440
00:12:15.440 --- 10.0.0.2 ping statistics ---
00:12:15.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:15.440 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # return 0
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=74916
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 74916
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 74916 ']'
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:15.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
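
waitforlisten, traced here, gates everything that follows: it polls the freshly launched nvmf_tgt until its JSON-RPC Unix socket answers, giving up after max_retries attempts. A sketch of that polling pattern (not the verbatim helper; rpc_get_methods is a cheap call any live SPDK app answers, and the retry cadence is an assumption):

    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; do
        (( max_retries-- > 0 )) || { echo 'nvmf_tgt (pid 74916) never started listening' >&2; exit 1; }
        sleep 0.1
    done
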
00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.440 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.440 [2024-10-01 15:25:14.493768] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:12:15.440 [2024-10-01 15:25:14.493900] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.698 [2024-10-01 15:25:14.636724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:15.698 [2024-10-01 15:25:14.707537] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.698 [2024-10-01 15:25:14.707602] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.698 [2024-10-01 15:25:14.707623] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.698 [2024-10-01 15:25:14.707639] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.698 [2024-10-01 15:25:14.707652] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.698 [2024-10-01 15:25:14.707795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.698 [2024-10-01 15:25:14.708127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.698 [2024-10-01 15:25:14.708154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.698 [2024-10-01 15:25:14.833758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
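
With the target process answering on /var/tmp/spdk.sock, connect_stress.sh provisions it entirely over JSON-RPC; rpc_cmd is the harness wrapper that forwards to scripts/rpc.py. Collapsed, the bring-up traced above and continued just below is (flag comments are annotations, not part of the run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192     # -u sets the I/O unit size; -o comes from NVMF_TRANSPORT_OPTS
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    #   -a allows any host NQN to connect, -s sets the serial number, -m caps namespaces at 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_null_create NULL1 1000 512             # 1000 MiB null bdev with 512-byte blocks
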
00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.698 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.698 [2024-10-01 15:25:14.866054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:15.956 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.956 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:15.956 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.956 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.956 NULL1 00:12:15.956 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.956 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=74951 00:12:15.956 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:15.956 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:15.956 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
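
What follows is the watchdog half of the stress test: the connect_stress client (PERF_PID 74951, started above with -t 10 so it churns controller connects for ten seconds) runs in the background while the script repeatedly proves the target still answers RPCs. The seq 1 20 / cat loop just traced batches twenty requests into rpc.txt (the heredoc bodies are not echoed by xtrace). The shape of the loop being traced below is roughly (a sketch under that assumption, not the verbatim script):

    PERF_PID=74951
    while kill -0 "$PERF_PID" 2>/dev/null; do
        # replay the batched RPCs while the client hammers the subsystem
        rpc_cmd < /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    done
    wait "$PERF_PID"   # propagate the stress client's exit status
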
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.957 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:16.216 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.216 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951
00:12:16.216 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:16.216 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.216 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:16.475 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.475 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951
00:12:16.475 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:16.475 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.475 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:16.733 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.733 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951
00:12:16.733 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:16.733 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.733 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:17.298 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.298 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951
00:12:17.298 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:17.298 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.298 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:17.555 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.555 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951
00:12:17.555 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:17.555 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.555 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:17.812 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.812 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951
00:12:17.812 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:17.812 15:25:16
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.812 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.069 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.069 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951 00:12:18.069 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.069 15:25:17 ... 00:12:25.565 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.565 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951 00:12:25.565 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.565 15:25:24
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.565 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.823 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.823 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951 00:12:25.823 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.823 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.823 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.081 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:26.081 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.081 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74951 00:12:26.081 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (74951) - No such process 00:12:26.081 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 74951 00:12:26.081 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:26.081 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:26.081 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:26.081 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:26.081 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.340 rmmod nvme_tcp 00:12:26.340 rmmod nvme_fabrics 00:12:26.340 rmmod nvme_keyring 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 74916 ']' 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 74916 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 74916 ']' 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 74916 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74916 00:12:26.340 killing process with pid 74916 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74916' 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 74916 00:12:26.340 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 74916 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:12:26.599 00:12:26.599 real 0m11.990s 00:12:26.599 user 0m38.775s 00:12:26.599 sys 0m3.339s 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.599 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.599 ************************************ 00:12:26.599 END TEST nvmf_connect_stress 00:12:26.599 ************************************ 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.858 ************************************ 00:12:26.858 START TEST nvmf_fused_ordering 00:12:26.858 ************************************ 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:26.858 * Looking for test storage... 
00:12:26.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.858 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:26.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.859 --rc genhtml_branch_coverage=1 00:12:26.859 --rc genhtml_function_coverage=1 00:12:26.859 --rc genhtml_legend=1 00:12:26.859 --rc geninfo_all_blocks=1 00:12:26.859 --rc geninfo_unexecuted_blocks=1 00:12:26.859 00:12:26.859 ' 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:26.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.859 --rc genhtml_branch_coverage=1 00:12:26.859 --rc genhtml_function_coverage=1 00:12:26.859 --rc genhtml_legend=1 00:12:26.859 --rc geninfo_all_blocks=1 00:12:26.859 --rc geninfo_unexecuted_blocks=1 00:12:26.859 00:12:26.859 ' 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:26.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.859 --rc genhtml_branch_coverage=1 00:12:26.859 --rc genhtml_function_coverage=1 00:12:26.859 --rc genhtml_legend=1 00:12:26.859 --rc geninfo_all_blocks=1 00:12:26.859 --rc geninfo_unexecuted_blocks=1 00:12:26.859 00:12:26.859 ' 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:26.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.859 --rc genhtml_branch_coverage=1 00:12:26.859 --rc genhtml_function_coverage=1 00:12:26.859 --rc genhtml_legend=1 00:12:26.859 --rc geninfo_all_blocks=1 00:12:26.859 --rc geninfo_unexecuted_blocks=1 00:12:26.859 00:12:26.859 ' 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:26.859 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.859 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:26.859 15:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.859 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:26.860 15:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:26.860 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:27.118 Cannot find device "nvmf_init_br" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:27.118 Cannot find device "nvmf_init_br2" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:27.118 Cannot find device "nvmf_tgt_br" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:27.118 Cannot find device "nvmf_tgt_br2" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:27.118 Cannot find device "nvmf_init_br" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:27.118 Cannot find device "nvmf_init_br2" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:27.118 Cannot find device "nvmf_tgt_br" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:27.118 Cannot find device "nvmf_tgt_br2" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:27.118 Cannot find device "nvmf_br" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:27.118 Cannot find device "nvmf_init_if" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:27.118 Cannot find device "nvmf_init_if2" 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.118 15:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:27.118 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:27.119 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:27.379 15:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:27.379 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:27.379 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:12:27.379 00:12:27.379 --- 10.0.0.3 ping statistics --- 00:12:27.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.379 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:27.379 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:27.379 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:12:27.379 00:12:27.379 --- 10.0.0.4 ping statistics --- 00:12:27.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.379 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:27.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:12:27.379 00:12:27.379 --- 10.0.0.1 ping statistics --- 00:12:27.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.379 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:27.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:27.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:12:27.379 00:12:27.379 --- 10.0.0.2 ping statistics --- 00:12:27.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.379 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # return 0 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=75330 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 75330 00:12:27.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 75330 ']' 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.379 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.379 [2024-10-01 15:25:26.480945] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:12:27.379 [2024-10-01 15:25:26.481754] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.638 [2024-10-01 15:25:26.624557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.638 [2024-10-01 15:25:26.708311] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.638 [2024-10-01 15:25:26.708368] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.638 [2024-10-01 15:25:26.708381] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.638 [2024-10-01 15:25:26.708389] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.638 [2024-10-01 15:25:26.708397] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.638 [2024-10-01 15:25:26.708444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.614 [2024-10-01 15:25:27.609244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.614 
[2024-10-01 15:25:27.625355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.614 NULL1 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.614 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:28.614 [2024-10-01 15:25:27.680440] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:12:28.614 [2024-10-01 15:25:27.680703] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75385 ]
00:12:29.181 Attached to nqn.2016-06.io.spdk:cnode1
00:12:29.181 Namespace ID: 1 size: 1GB
00:12:29.181 fused_ordering(0) ... fused_ordering(420)
00:12:29.700 fused_ordering(421) 00:12:29.700 fused_ordering(422) 00:12:29.700 fused_ordering(423) 00:12:29.700 fused_ordering(424) 00:12:29.700 fused_ordering(425) 00:12:29.700 fused_ordering(426) 00:12:29.700 fused_ordering(427) 00:12:29.700 fused_ordering(428) 00:12:29.700 fused_ordering(429) 00:12:29.700 fused_ordering(430) 00:12:29.700 fused_ordering(431) 00:12:29.700 fused_ordering(432) 00:12:29.700 fused_ordering(433) 00:12:29.700 fused_ordering(434) 00:12:29.700 fused_ordering(435) 00:12:29.700 fused_ordering(436) 00:12:29.700 fused_ordering(437) 00:12:29.700 fused_ordering(438) 00:12:29.700 fused_ordering(439) 00:12:29.700 fused_ordering(440) 00:12:29.700 fused_ordering(441) 00:12:29.700 fused_ordering(442) 00:12:29.700 fused_ordering(443) 00:12:29.700 fused_ordering(444) 00:12:29.700 fused_ordering(445) 00:12:29.700 fused_ordering(446) 00:12:29.700 fused_ordering(447) 00:12:29.700 fused_ordering(448) 00:12:29.700 fused_ordering(449) 00:12:29.700 fused_ordering(450) 00:12:29.700 fused_ordering(451) 00:12:29.700 fused_ordering(452) 00:12:29.700 fused_ordering(453) 00:12:29.700 fused_ordering(454) 00:12:29.700 fused_ordering(455) 00:12:29.700 fused_ordering(456) 00:12:29.700 fused_ordering(457) 00:12:29.700 fused_ordering(458) 00:12:29.700 fused_ordering(459) 00:12:29.700 fused_ordering(460) 00:12:29.700 fused_ordering(461) 00:12:29.700 fused_ordering(462) 00:12:29.700 fused_ordering(463) 00:12:29.700 fused_ordering(464) 00:12:29.700 fused_ordering(465) 00:12:29.700 fused_ordering(466) 00:12:29.700 fused_ordering(467) 00:12:29.700 fused_ordering(468) 00:12:29.700 fused_ordering(469) 00:12:29.700 fused_ordering(470) 00:12:29.700 fused_ordering(471) 00:12:29.700 fused_ordering(472) 00:12:29.700 fused_ordering(473) 00:12:29.700 fused_ordering(474) 00:12:29.700 fused_ordering(475) 00:12:29.700 fused_ordering(476) 00:12:29.700 fused_ordering(477) 00:12:29.700 fused_ordering(478) 00:12:29.700 fused_ordering(479) 00:12:29.700 fused_ordering(480) 00:12:29.700 fused_ordering(481) 00:12:29.700 fused_ordering(482) 00:12:29.700 fused_ordering(483) 00:12:29.700 fused_ordering(484) 00:12:29.700 fused_ordering(485) 00:12:29.700 fused_ordering(486) 00:12:29.700 fused_ordering(487) 00:12:29.700 fused_ordering(488) 00:12:29.700 fused_ordering(489) 00:12:29.700 fused_ordering(490) 00:12:29.700 fused_ordering(491) 00:12:29.700 fused_ordering(492) 00:12:29.700 fused_ordering(493) 00:12:29.700 fused_ordering(494) 00:12:29.700 fused_ordering(495) 00:12:29.700 fused_ordering(496) 00:12:29.700 fused_ordering(497) 00:12:29.700 fused_ordering(498) 00:12:29.700 fused_ordering(499) 00:12:29.700 fused_ordering(500) 00:12:29.700 fused_ordering(501) 00:12:29.700 fused_ordering(502) 00:12:29.700 fused_ordering(503) 00:12:29.700 fused_ordering(504) 00:12:29.700 fused_ordering(505) 00:12:29.700 fused_ordering(506) 00:12:29.700 fused_ordering(507) 00:12:29.700 fused_ordering(508) 00:12:29.700 fused_ordering(509) 00:12:29.700 fused_ordering(510) 00:12:29.700 fused_ordering(511) 00:12:29.700 fused_ordering(512) 00:12:29.700 fused_ordering(513) 00:12:29.700 fused_ordering(514) 00:12:29.700 fused_ordering(515) 00:12:29.700 fused_ordering(516) 00:12:29.700 fused_ordering(517) 00:12:29.700 fused_ordering(518) 00:12:29.700 fused_ordering(519) 00:12:29.700 fused_ordering(520) 00:12:29.700 fused_ordering(521) 00:12:29.700 fused_ordering(522) 00:12:29.700 fused_ordering(523) 00:12:29.700 fused_ordering(524) 00:12:29.700 fused_ordering(525) 00:12:29.700 fused_ordering(526) 00:12:29.700 fused_ordering(527) 00:12:29.700 
fused_ordering(528) 00:12:29.700 fused_ordering(529) 00:12:29.700 fused_ordering(530) 00:12:29.700 fused_ordering(531) 00:12:29.700 fused_ordering(532) 00:12:29.700 fused_ordering(533) 00:12:29.700 fused_ordering(534) 00:12:29.700 fused_ordering(535) 00:12:29.700 fused_ordering(536) 00:12:29.700 fused_ordering(537) 00:12:29.700 fused_ordering(538) 00:12:29.700 fused_ordering(539) 00:12:29.700 fused_ordering(540) 00:12:29.700 fused_ordering(541) 00:12:29.700 fused_ordering(542) 00:12:29.700 fused_ordering(543) 00:12:29.700 fused_ordering(544) 00:12:29.700 fused_ordering(545) 00:12:29.700 fused_ordering(546) 00:12:29.700 fused_ordering(547) 00:12:29.700 fused_ordering(548) 00:12:29.700 fused_ordering(549) 00:12:29.700 fused_ordering(550) 00:12:29.700 fused_ordering(551) 00:12:29.700 fused_ordering(552) 00:12:29.700 fused_ordering(553) 00:12:29.700 fused_ordering(554) 00:12:29.700 fused_ordering(555) 00:12:29.700 fused_ordering(556) 00:12:29.700 fused_ordering(557) 00:12:29.700 fused_ordering(558) 00:12:29.700 fused_ordering(559) 00:12:29.700 fused_ordering(560) 00:12:29.700 fused_ordering(561) 00:12:29.700 fused_ordering(562) 00:12:29.700 fused_ordering(563) 00:12:29.700 fused_ordering(564) 00:12:29.700 fused_ordering(565) 00:12:29.700 fused_ordering(566) 00:12:29.700 fused_ordering(567) 00:12:29.700 fused_ordering(568) 00:12:29.700 fused_ordering(569) 00:12:29.700 fused_ordering(570) 00:12:29.700 fused_ordering(571) 00:12:29.700 fused_ordering(572) 00:12:29.700 fused_ordering(573) 00:12:29.700 fused_ordering(574) 00:12:29.700 fused_ordering(575) 00:12:29.700 fused_ordering(576) 00:12:29.700 fused_ordering(577) 00:12:29.700 fused_ordering(578) 00:12:29.701 fused_ordering(579) 00:12:29.701 fused_ordering(580) 00:12:29.701 fused_ordering(581) 00:12:29.701 fused_ordering(582) 00:12:29.701 fused_ordering(583) 00:12:29.701 fused_ordering(584) 00:12:29.701 fused_ordering(585) 00:12:29.701 fused_ordering(586) 00:12:29.701 fused_ordering(587) 00:12:29.701 fused_ordering(588) 00:12:29.701 fused_ordering(589) 00:12:29.701 fused_ordering(590) 00:12:29.701 fused_ordering(591) 00:12:29.701 fused_ordering(592) 00:12:29.701 fused_ordering(593) 00:12:29.701 fused_ordering(594) 00:12:29.701 fused_ordering(595) 00:12:29.701 fused_ordering(596) 00:12:29.701 fused_ordering(597) 00:12:29.701 fused_ordering(598) 00:12:29.701 fused_ordering(599) 00:12:29.701 fused_ordering(600) 00:12:29.701 fused_ordering(601) 00:12:29.701 fused_ordering(602) 00:12:29.701 fused_ordering(603) 00:12:29.701 fused_ordering(604) 00:12:29.701 fused_ordering(605) 00:12:29.701 fused_ordering(606) 00:12:29.701 fused_ordering(607) 00:12:29.701 fused_ordering(608) 00:12:29.701 fused_ordering(609) 00:12:29.701 fused_ordering(610) 00:12:29.701 fused_ordering(611) 00:12:29.701 fused_ordering(612) 00:12:29.701 fused_ordering(613) 00:12:29.701 fused_ordering(614) 00:12:29.701 fused_ordering(615) 00:12:30.267 fused_ordering(616) 00:12:30.267 fused_ordering(617) 00:12:30.267 fused_ordering(618) 00:12:30.267 fused_ordering(619) 00:12:30.267 fused_ordering(620) 00:12:30.267 fused_ordering(621) 00:12:30.267 fused_ordering(622) 00:12:30.267 fused_ordering(623) 00:12:30.267 fused_ordering(624) 00:12:30.267 fused_ordering(625) 00:12:30.267 fused_ordering(626) 00:12:30.267 fused_ordering(627) 00:12:30.267 fused_ordering(628) 00:12:30.267 fused_ordering(629) 00:12:30.267 fused_ordering(630) 00:12:30.267 fused_ordering(631) 00:12:30.267 fused_ordering(632) 00:12:30.267 fused_ordering(633) 00:12:30.267 fused_ordering(634) 00:12:30.267 fused_ordering(635) 
00:12:30.267 fused_ordering(636) 00:12:30.267 fused_ordering(637) 00:12:30.267 fused_ordering(638) 00:12:30.267 fused_ordering(639) 00:12:30.267 fused_ordering(640) 00:12:30.267 fused_ordering(641) 00:12:30.267 fused_ordering(642) 00:12:30.267 fused_ordering(643) 00:12:30.267 fused_ordering(644) 00:12:30.267 fused_ordering(645) 00:12:30.267 fused_ordering(646) 00:12:30.267 fused_ordering(647) 00:12:30.267 fused_ordering(648) 00:12:30.267 fused_ordering(649) 00:12:30.267 fused_ordering(650) 00:12:30.267 fused_ordering(651) 00:12:30.267 fused_ordering(652) 00:12:30.267 fused_ordering(653) 00:12:30.267 fused_ordering(654) 00:12:30.267 fused_ordering(655) 00:12:30.267 fused_ordering(656) 00:12:30.267 fused_ordering(657) 00:12:30.267 fused_ordering(658) 00:12:30.267 fused_ordering(659) 00:12:30.267 fused_ordering(660) 00:12:30.267 fused_ordering(661) 00:12:30.267 fused_ordering(662) 00:12:30.267 fused_ordering(663) 00:12:30.267 fused_ordering(664) 00:12:30.267 fused_ordering(665) 00:12:30.267 fused_ordering(666) 00:12:30.267 fused_ordering(667) 00:12:30.267 fused_ordering(668) 00:12:30.267 fused_ordering(669) 00:12:30.267 fused_ordering(670) 00:12:30.267 fused_ordering(671) 00:12:30.267 fused_ordering(672) 00:12:30.267 fused_ordering(673) 00:12:30.267 fused_ordering(674) 00:12:30.267 fused_ordering(675) 00:12:30.267 fused_ordering(676) 00:12:30.267 fused_ordering(677) 00:12:30.267 fused_ordering(678) 00:12:30.267 fused_ordering(679) 00:12:30.267 fused_ordering(680) 00:12:30.267 fused_ordering(681) 00:12:30.267 fused_ordering(682) 00:12:30.267 fused_ordering(683) 00:12:30.267 fused_ordering(684) 00:12:30.267 fused_ordering(685) 00:12:30.267 fused_ordering(686) 00:12:30.267 fused_ordering(687) 00:12:30.267 fused_ordering(688) 00:12:30.267 fused_ordering(689) 00:12:30.267 fused_ordering(690) 00:12:30.267 fused_ordering(691) 00:12:30.267 fused_ordering(692) 00:12:30.267 fused_ordering(693) 00:12:30.267 fused_ordering(694) 00:12:30.267 fused_ordering(695) 00:12:30.267 fused_ordering(696) 00:12:30.267 fused_ordering(697) 00:12:30.267 fused_ordering(698) 00:12:30.267 fused_ordering(699) 00:12:30.267 fused_ordering(700) 00:12:30.267 fused_ordering(701) 00:12:30.267 fused_ordering(702) 00:12:30.267 fused_ordering(703) 00:12:30.267 fused_ordering(704) 00:12:30.267 fused_ordering(705) 00:12:30.267 fused_ordering(706) 00:12:30.267 fused_ordering(707) 00:12:30.267 fused_ordering(708) 00:12:30.267 fused_ordering(709) 00:12:30.267 fused_ordering(710) 00:12:30.267 fused_ordering(711) 00:12:30.267 fused_ordering(712) 00:12:30.267 fused_ordering(713) 00:12:30.267 fused_ordering(714) 00:12:30.267 fused_ordering(715) 00:12:30.267 fused_ordering(716) 00:12:30.267 fused_ordering(717) 00:12:30.267 fused_ordering(718) 00:12:30.267 fused_ordering(719) 00:12:30.267 fused_ordering(720) 00:12:30.267 fused_ordering(721) 00:12:30.267 fused_ordering(722) 00:12:30.267 fused_ordering(723) 00:12:30.267 fused_ordering(724) 00:12:30.267 fused_ordering(725) 00:12:30.267 fused_ordering(726) 00:12:30.267 fused_ordering(727) 00:12:30.267 fused_ordering(728) 00:12:30.267 fused_ordering(729) 00:12:30.267 fused_ordering(730) 00:12:30.267 fused_ordering(731) 00:12:30.267 fused_ordering(732) 00:12:30.267 fused_ordering(733) 00:12:30.267 fused_ordering(734) 00:12:30.267 fused_ordering(735) 00:12:30.267 fused_ordering(736) 00:12:30.267 fused_ordering(737) 00:12:30.267 fused_ordering(738) 00:12:30.267 fused_ordering(739) 00:12:30.267 fused_ordering(740) 00:12:30.267 fused_ordering(741) 00:12:30.267 fused_ordering(742) 00:12:30.267 
fused_ordering(743) 00:12:30.267 fused_ordering(744) 00:12:30.267 fused_ordering(745) 00:12:30.267 fused_ordering(746) 00:12:30.267 fused_ordering(747) 00:12:30.267 fused_ordering(748) 00:12:30.267 fused_ordering(749) 00:12:30.267 fused_ordering(750) 00:12:30.267 fused_ordering(751) 00:12:30.267 fused_ordering(752) 00:12:30.267 fused_ordering(753) 00:12:30.267 fused_ordering(754) 00:12:30.267 fused_ordering(755) 00:12:30.267 fused_ordering(756) 00:12:30.267 fused_ordering(757) 00:12:30.267 fused_ordering(758) 00:12:30.267 fused_ordering(759) 00:12:30.267 fused_ordering(760) 00:12:30.267 fused_ordering(761) 00:12:30.267 fused_ordering(762) 00:12:30.267 fused_ordering(763) 00:12:30.267 fused_ordering(764) 00:12:30.267 fused_ordering(765) 00:12:30.267 fused_ordering(766) 00:12:30.267 fused_ordering(767) 00:12:30.267 fused_ordering(768) 00:12:30.267 fused_ordering(769) 00:12:30.267 fused_ordering(770) 00:12:30.267 fused_ordering(771) 00:12:30.267 fused_ordering(772) 00:12:30.267 fused_ordering(773) 00:12:30.267 fused_ordering(774) 00:12:30.267 fused_ordering(775) 00:12:30.267 fused_ordering(776) 00:12:30.267 fused_ordering(777) 00:12:30.267 fused_ordering(778) 00:12:30.267 fused_ordering(779) 00:12:30.267 fused_ordering(780) 00:12:30.267 fused_ordering(781) 00:12:30.267 fused_ordering(782) 00:12:30.267 fused_ordering(783) 00:12:30.267 fused_ordering(784) 00:12:30.267 fused_ordering(785) 00:12:30.267 fused_ordering(786) 00:12:30.267 fused_ordering(787) 00:12:30.267 fused_ordering(788) 00:12:30.267 fused_ordering(789) 00:12:30.267 fused_ordering(790) 00:12:30.267 fused_ordering(791) 00:12:30.267 fused_ordering(792) 00:12:30.267 fused_ordering(793) 00:12:30.267 fused_ordering(794) 00:12:30.267 fused_ordering(795) 00:12:30.267 fused_ordering(796) 00:12:30.267 fused_ordering(797) 00:12:30.267 fused_ordering(798) 00:12:30.267 fused_ordering(799) 00:12:30.267 fused_ordering(800) 00:12:30.267 fused_ordering(801) 00:12:30.267 fused_ordering(802) 00:12:30.267 fused_ordering(803) 00:12:30.267 fused_ordering(804) 00:12:30.268 fused_ordering(805) 00:12:30.268 fused_ordering(806) 00:12:30.268 fused_ordering(807) 00:12:30.268 fused_ordering(808) 00:12:30.268 fused_ordering(809) 00:12:30.268 fused_ordering(810) 00:12:30.268 fused_ordering(811) 00:12:30.268 fused_ordering(812) 00:12:30.268 fused_ordering(813) 00:12:30.268 fused_ordering(814) 00:12:30.268 fused_ordering(815) 00:12:30.268 fused_ordering(816) 00:12:30.268 fused_ordering(817) 00:12:30.268 fused_ordering(818) 00:12:30.268 fused_ordering(819) 00:12:30.268 fused_ordering(820) 00:12:30.835 fused_ordering(821) 00:12:30.835 fused_ordering(822) 00:12:30.835 fused_ordering(823) 00:12:30.835 fused_ordering(824) 00:12:30.835 fused_ordering(825) 00:12:30.835 fused_ordering(826) 00:12:30.835 fused_ordering(827) 00:12:30.835 fused_ordering(828) 00:12:30.835 fused_ordering(829) 00:12:30.835 fused_ordering(830) 00:12:30.835 fused_ordering(831) 00:12:30.835 fused_ordering(832) 00:12:30.835 fused_ordering(833) 00:12:30.835 fused_ordering(834) 00:12:30.835 fused_ordering(835) 00:12:30.835 fused_ordering(836) 00:12:30.835 fused_ordering(837) 00:12:30.835 fused_ordering(838) 00:12:30.835 fused_ordering(839) 00:12:30.835 fused_ordering(840) 00:12:30.835 fused_ordering(841) 00:12:30.835 fused_ordering(842) 00:12:30.835 fused_ordering(843) 00:12:30.835 fused_ordering(844) 00:12:30.835 fused_ordering(845) 00:12:30.835 fused_ordering(846) 00:12:30.835 fused_ordering(847) 00:12:30.835 fused_ordering(848) 00:12:30.835 fused_ordering(849) 00:12:30.835 fused_ordering(850) 
00:12:30.836 fused_ordering(851) 00:12:30.836 fused_ordering(852) 00:12:30.836 fused_ordering(853) 00:12:30.836 fused_ordering(854) 00:12:30.836 fused_ordering(855) 00:12:30.836 fused_ordering(856) 00:12:30.836 fused_ordering(857) 00:12:30.836 fused_ordering(858) 00:12:30.836 fused_ordering(859) 00:12:30.836 fused_ordering(860) 00:12:30.836 fused_ordering(861) 00:12:30.836 fused_ordering(862) 00:12:30.836 fused_ordering(863) 00:12:30.836 fused_ordering(864) 00:12:30.836 fused_ordering(865) 00:12:30.836 fused_ordering(866) 00:12:30.836 fused_ordering(867) 00:12:30.836 fused_ordering(868) 00:12:30.836 fused_ordering(869) 00:12:30.836 fused_ordering(870) 00:12:30.836 fused_ordering(871) 00:12:30.836 fused_ordering(872) 00:12:30.836 fused_ordering(873) 00:12:30.836 fused_ordering(874) 00:12:30.836 fused_ordering(875) 00:12:30.836 fused_ordering(876) 00:12:30.836 fused_ordering(877) 00:12:30.836 fused_ordering(878) 00:12:30.836 fused_ordering(879) 00:12:30.836 fused_ordering(880) 00:12:30.836 fused_ordering(881) 00:12:30.836 fused_ordering(882) 00:12:30.836 fused_ordering(883) 00:12:30.836 fused_ordering(884) 00:12:30.836 fused_ordering(885) 00:12:30.836 fused_ordering(886) 00:12:30.836 fused_ordering(887) 00:12:30.836 fused_ordering(888) 00:12:30.836 fused_ordering(889) 00:12:30.836 fused_ordering(890) 00:12:30.836 fused_ordering(891) 00:12:30.836 fused_ordering(892) 00:12:30.836 fused_ordering(893) 00:12:30.836 fused_ordering(894) 00:12:30.836 fused_ordering(895) 00:12:30.836 fused_ordering(896) 00:12:30.836 fused_ordering(897) 00:12:30.836 fused_ordering(898) 00:12:30.836 fused_ordering(899) 00:12:30.836 fused_ordering(900) 00:12:30.836 fused_ordering(901) 00:12:30.836 fused_ordering(902) 00:12:30.836 fused_ordering(903) 00:12:30.836 fused_ordering(904) 00:12:30.836 fused_ordering(905) 00:12:30.836 fused_ordering(906) 00:12:30.836 fused_ordering(907) 00:12:30.836 fused_ordering(908) 00:12:30.836 fused_ordering(909) 00:12:30.836 fused_ordering(910) 00:12:30.836 fused_ordering(911) 00:12:30.836 fused_ordering(912) 00:12:30.836 fused_ordering(913) 00:12:30.836 fused_ordering(914) 00:12:30.836 fused_ordering(915) 00:12:30.836 fused_ordering(916) 00:12:30.836 fused_ordering(917) 00:12:30.836 fused_ordering(918) 00:12:30.836 fused_ordering(919) 00:12:30.836 fused_ordering(920) 00:12:30.836 fused_ordering(921) 00:12:30.836 fused_ordering(922) 00:12:30.836 fused_ordering(923) 00:12:30.836 fused_ordering(924) 00:12:30.836 fused_ordering(925) 00:12:30.836 fused_ordering(926) 00:12:30.836 fused_ordering(927) 00:12:30.836 fused_ordering(928) 00:12:30.836 fused_ordering(929) 00:12:30.836 fused_ordering(930) 00:12:30.836 fused_ordering(931) 00:12:30.836 fused_ordering(932) 00:12:30.836 fused_ordering(933) 00:12:30.836 fused_ordering(934) 00:12:30.836 fused_ordering(935) 00:12:30.836 fused_ordering(936) 00:12:30.836 fused_ordering(937) 00:12:30.836 fused_ordering(938) 00:12:30.836 fused_ordering(939) 00:12:30.836 fused_ordering(940) 00:12:30.836 fused_ordering(941) 00:12:30.836 fused_ordering(942) 00:12:30.836 fused_ordering(943) 00:12:30.836 fused_ordering(944) 00:12:30.836 fused_ordering(945) 00:12:30.836 fused_ordering(946) 00:12:30.836 fused_ordering(947) 00:12:30.836 fused_ordering(948) 00:12:30.836 fused_ordering(949) 00:12:30.836 fused_ordering(950) 00:12:30.836 fused_ordering(951) 00:12:30.836 fused_ordering(952) 00:12:30.836 fused_ordering(953) 00:12:30.836 fused_ordering(954) 00:12:30.836 fused_ordering(955) 00:12:30.836 fused_ordering(956) 00:12:30.836 fused_ordering(957) 00:12:30.836 
fused_ordering(958) 00:12:30.836 fused_ordering(959) 00:12:30.836 fused_ordering(960) 00:12:30.836 fused_ordering(961) 00:12:30.836 fused_ordering(962) 00:12:30.836 fused_ordering(963) 00:12:30.836 fused_ordering(964) 00:12:30.836 fused_ordering(965) 00:12:30.836 fused_ordering(966) 00:12:30.836 fused_ordering(967) 00:12:30.836 fused_ordering(968) 00:12:30.836 fused_ordering(969) 00:12:30.836 fused_ordering(970) 00:12:30.836 fused_ordering(971) 00:12:30.836 fused_ordering(972) 00:12:30.836 fused_ordering(973) 00:12:30.836 fused_ordering(974) 00:12:30.836 fused_ordering(975) 00:12:30.836 fused_ordering(976) 00:12:30.836 fused_ordering(977) 00:12:30.836 fused_ordering(978) 00:12:30.836 fused_ordering(979) 00:12:30.836 fused_ordering(980) 00:12:30.836 fused_ordering(981) 00:12:30.836 fused_ordering(982) 00:12:30.836 fused_ordering(983) 00:12:30.836 fused_ordering(984) 00:12:30.836 fused_ordering(985) 00:12:30.836 fused_ordering(986) 00:12:30.836 fused_ordering(987) 00:12:30.836 fused_ordering(988) 00:12:30.836 fused_ordering(989) 00:12:30.836 fused_ordering(990) 00:12:30.836 fused_ordering(991) 00:12:30.836 fused_ordering(992) 00:12:30.836 fused_ordering(993) 00:12:30.836 fused_ordering(994) 00:12:30.836 fused_ordering(995) 00:12:30.836 fused_ordering(996) 00:12:30.836 fused_ordering(997) 00:12:30.836 fused_ordering(998) 00:12:30.836 fused_ordering(999) 00:12:30.836 fused_ordering(1000) 00:12:30.836 fused_ordering(1001) 00:12:30.836 fused_ordering(1002) 00:12:30.836 fused_ordering(1003) 00:12:30.836 fused_ordering(1004) 00:12:30.836 fused_ordering(1005) 00:12:30.836 fused_ordering(1006) 00:12:30.836 fused_ordering(1007) 00:12:30.836 fused_ordering(1008) 00:12:30.836 fused_ordering(1009) 00:12:30.836 fused_ordering(1010) 00:12:30.836 fused_ordering(1011) 00:12:30.836 fused_ordering(1012) 00:12:30.836 fused_ordering(1013) 00:12:30.836 fused_ordering(1014) 00:12:30.836 fused_ordering(1015) 00:12:30.836 fused_ordering(1016) 00:12:30.836 fused_ordering(1017) 00:12:30.836 fused_ordering(1018) 00:12:30.836 fused_ordering(1019) 00:12:30.836 fused_ordering(1020) 00:12:30.836 fused_ordering(1021) 00:12:30.836 fused_ordering(1022) 00:12:30.836 fused_ordering(1023) 00:12:30.836 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:30.836 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:30.836 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:30.836 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:30.836 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:30.836 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.837 rmmod nvme_tcp 00:12:30.837 rmmod nvme_fabrics 00:12:30.837 rmmod nvme_keyring 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:30.837 15:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 75330 ']' 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 75330 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 75330 ']' 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 75330 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75330 00:12:30.837 killing process with pid 75330 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75330' 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 75330 00:12:30.837 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 75330 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:31.094 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:31.095 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:12:31.095 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:12:31.353 00:12:31.353 real 0m4.588s 00:12:31.353 user 0m5.342s 00:12:31.353 sys 0m1.429s 00:12:31.353 ************************************ 00:12:31.353 END TEST nvmf_fused_ordering 00:12:31.353 ************************************ 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.353 ************************************ 00:12:31.353 START TEST nvmf_ns_masking 00:12:31.353 ************************************ 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:31.353 * Looking for test storage... 
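[editor's note] Before the ns_masking setup starts, the nvmftestfini teardown traced above is worth summarizing: firewall rules are removed by filtering on a comment tag rather than by flushing whole chains, and then the veth topology is dismantled. A condensed sketch of what the traced commands amount to (interface and namespace names as in this log; the body of the _remove_spdk_ns helper is not shown here, so its last line is an assumption):

    # iptr: reload the ruleset minus anything tagged SPDK_NVMF, leaving unrelated rules intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # nvmf_veth_fini: unbridge and delete the test interfaces
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed: what _remove_spdk_ns ultimately does

Tagging rules with a comment and filtering on it at teardown is what lets these tests run on shared CI hosts without clobbering pre-existing iptables state.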
00:12:31.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:12:31.353 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:31.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.612 --rc genhtml_branch_coverage=1 00:12:31.612 --rc genhtml_function_coverage=1 00:12:31.612 --rc genhtml_legend=1 00:12:31.612 --rc geninfo_all_blocks=1 00:12:31.612 --rc geninfo_unexecuted_blocks=1 00:12:31.612 00:12:31.612 ' 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:31.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.612 --rc genhtml_branch_coverage=1 00:12:31.612 --rc genhtml_function_coverage=1 00:12:31.612 --rc genhtml_legend=1 00:12:31.612 --rc geninfo_all_blocks=1 00:12:31.612 --rc geninfo_unexecuted_blocks=1 00:12:31.612 00:12:31.612 ' 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:31.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.612 --rc genhtml_branch_coverage=1 00:12:31.612 --rc genhtml_function_coverage=1 00:12:31.612 --rc genhtml_legend=1 00:12:31.612 --rc geninfo_all_blocks=1 00:12:31.612 --rc geninfo_unexecuted_blocks=1 00:12:31.612 00:12:31.612 ' 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:31.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.612 --rc genhtml_branch_coverage=1 00:12:31.612 --rc genhtml_function_coverage=1 00:12:31.612 --rc genhtml_legend=1 00:12:31.612 --rc geninfo_all_blocks=1 00:12:31.612 --rc geninfo_unexecuted_blocks=1 00:12:31.612 00:12:31.612 ' 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.612 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.613 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0e45c85b-0333-459c-a25c-f1bdeac5ed02 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d2d19a30-824a-46df-9248-a06abefffbfc 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c4dce468-8f9f-4402-bc8f-f79141a70e74 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:31.613 15:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:31.613 Cannot find device "nvmf_init_br" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:31.613 Cannot find device "nvmf_init_br2" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:31.613 Cannot find device "nvmf_tgt_br" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.613 Cannot find device "nvmf_tgt_br2" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:31.613 Cannot find device "nvmf_init_br" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:31.613 Cannot find device "nvmf_init_br2" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:31.613 Cannot find device "nvmf_tgt_br" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:31.613 Cannot find device 
"nvmf_tgt_br2" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:31.613 Cannot find device "nvmf_br" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:31.613 Cannot find device "nvmf_init_if" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:31.613 Cannot find device "nvmf_init_if2" 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.613 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.613 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:12:31.613 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:31.870 
15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:31.870 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:31.870 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:12:31.870 00:12:31.870 --- 10.0.0.3 ping statistics --- 00:12:31.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.870 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:31.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:31.870 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:31.870 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:12:31.870 00:12:31.870 --- 10.0.0.4 ping statistics --- 00:12:31.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.870 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:31.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:31.870 00:12:31.870 --- 10.0.0.1 ping statistics --- 00:12:31.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.870 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:31.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:12:31.870 00:12:31.870 --- 10.0.0.2 ping statistics --- 00:12:31.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.870 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # return 0 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.870 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.127 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=75648 00:12:32.127 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:32.127 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 75648 00:12:32.127 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 75648 ']' 00:12:32.127 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.127 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.127 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:12:32.128 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.128 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.128 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.128 [2024-10-01 15:25:31.096819] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:12:32.128 [2024-10-01 15:25:31.096914] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.128 [2024-10-01 15:25:31.234798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.128 [2024-10-01 15:25:31.292127] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.128 [2024-10-01 15:25:31.292187] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.128 [2024-10-01 15:25:31.292199] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.128 [2024-10-01 15:25:31.292207] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.128 [2024-10-01 15:25:31.292215] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.128 [2024-10-01 15:25:31.292243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.385 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.385 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:12:32.385 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:32.385 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.385 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.385 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.385 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:32.684 [2024-10-01 15:25:31.676122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.684 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:32.684 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:32.684 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:32.942 Malloc1 00:12:32.942 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:33.200 Malloc2 00:12:33.200 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:33.458 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:33.717 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:33.976 [2024-10-01 15:25:33.087866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:33.976 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:33.976 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4dce468-8f9f-4402-bc8f-f79141a70e74 -a 10.0.0.3 -s 4420 -i 4 00:12:34.241 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.241 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:34.241 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.241 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:34.241 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:36.144 [ 0]:0x1 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.144 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
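The visibility probe repeated throughout this test greps nvme list-ns output for the NSID and then compares the NGUID reported by nvme id-ns against all zeros; a namespace the controller cannot see either drops out of list-ns or reports a zero NGUID. A minimal sketch of the helper, reconstructed from the commands logged here (the actual body of ns_is_visible in target/ns_masking.sh is not shown in the log, so treat the function shape as an assumption):

    # Hypothetical reconstruction of ns_is_visible from the logged commands.
    ns_is_visible() {
        # Prints e.g. "[ 0]:0x1" when the namespace is attached and visible.
        nvme list-ns /dev/nvme0 | grep "$1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # A host that has been masked out sees an all-zero NGUID.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }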
00:12:36.402 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2ef126b85594aa48de75c1495e47c3e 00:12:36.402 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2ef126b85594aa48de75c1495e47c3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.402 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:36.660 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:36.660 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:36.660 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.660 [ 0]:0x1 00:12:36.660 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.660 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.660 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2ef126b85594aa48de75c1495e47c3e 00:12:36.660 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2ef126b85594aa48de75c1495e47c3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.661 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:36.661 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.661 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:36.661 [ 1]:0x2 00:12:36.661 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.661 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.661 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=254a5c73b9c64e6b9ae8b01435dff9a7 00:12:36.661 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 254a5c73b9c64e6b9ae8b01435dff9a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.661 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:36.661 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.919 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.178 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:37.448 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:37.448 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4dce468-8f9f-4402-bc8f-f79141a70e74 -a 10.0.0.3 -s 4420 -i 4 00:12:37.448 15:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:37.448 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.448 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.448 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:37.448 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:37.448 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:39.392 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:39.392 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:39.392 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.392 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.650 [ 0]:0x2 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=254a5c73b9c64e6b9ae8b01435dff9a7 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 254a5c73b9c64e6b9ae8b01435dff9a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.650 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.908 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:39.908 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.908 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.908 [ 0]:0x1 00:12:39.908 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.908 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.167 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2ef126b85594aa48de75c1495e47c3e 00:12:40.167 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2ef126b85594aa48de75c1495e47c3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.167 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:40.167 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:40.167 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:40.167 [ 1]:0x2 00:12:40.167 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.167 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.167 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=254a5c73b9c64e6b9ae8b01435dff9a7 00:12:40.167 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 254a5c73b9c64e6b9ae8b01435dff9a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.167 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:40.425 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:40.425 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:40.425 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:40.425 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:40.425 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:40.425 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:40.425 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:40.426 [ 0]:0x2 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.426 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.684 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=254a5c73b9c64e6b9ae8b01435dff9a7 00:12:40.684 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 254a5c73b9c64e6b9ae8b01435dff9a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.684 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:40.684 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.684 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:40.941 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:40.941 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4dce468-8f9f-4402-bc8f-f79141a70e74 -a 10.0.0.3 -s 4420 -i 4 00:12:40.941 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:40.941 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.941 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.941 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:40.942 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:40.942 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.470 [ 0]:0x1 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2ef126b85594aa48de75c1495e47c3e 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2ef126b85594aa48de75c1495e47c3e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.470 [ 1]:0x2 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=254a5c73b9c64e6b9ae8b01435dff9a7 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 254a5c73b9c64e6b9ae8b01435dff9a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.470 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:43.471 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.471 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 
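At this point the run has exercised both directions of per-host masking on a live connection: nvmf_ns_add_host made the hidden namespace 1 appear to host1 without a reconnect, and nvmf_ns_remove_host hid it again (the probe above now finds only an all-zero NGUID). The RPC sequence, exactly as logged:

    # Namespace 1 is created hidden, then selectively exposed to one host NQN.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host \
        nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host \
        nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1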
00:12:43.471 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.471 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.471 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.471 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:43.471 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.471 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.471 [ 0]:0x2 00:12:43.471 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.471 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=254a5c73b9c64e6b9ae8b01435dff9a7 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 254a5c73b9c64e6b9ae8b01435dff9a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:43.729 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:43.987 [2024-10-01 15:25:42.931277] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:43.987 2024/10/01 15:25:42 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:12:43.987 request: 00:12:43.987 { 00:12:43.987 "method": "nvmf_ns_remove_host", 00:12:43.987 "params": { 00:12:43.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.987 "nsid": 2, 00:12:43.987 "host": "nqn.2016-06.io.spdk:host1" 00:12:43.987 } 00:12:43.987 } 00:12:43.987 Got JSON-RPC error response 00:12:43.987 GoRPCClient: error on JSON-RPC call 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.987 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:12:43.987 [ 0]:0x2 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=254a5c73b9c64e6b9ae8b01435dff9a7 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 254a5c73b9c64e6b9ae8b01435dff9a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76016 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76016 /var/tmp/host.sock 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 76016 ']' 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:43.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:43.987 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:44.246 [2024-10-01 15:25:43.203441] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
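The Code=-32602 "Invalid parameters" failure above is the negative case the NOT wrapper expects: nvmf_ns_remove_host is rejected for namespace 2, presumably because that namespace was added without --no-auto-visible, so per-host visibility does not apply to it (the target logs "Unable to add/remove ... to namespace ID 2"). The test then starts a second SPDK instance (pid 76016) to act as the NVMe-oF host side; hostrpc in the following records is rpc.py pointed at that instance's socket. A sketch with paths taken from the log (backgrounding with & is an assumption; the harness manages the process itself):

    # Host-side app: same repo, separate RPC socket, core mask 0x2.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    # Attach one controller per host NQN, as the records below do for host1/host2.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0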
00:12:44.246 [2024-10-01 15:25:43.203544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76016 ] 00:12:44.246 [2024-10-01 15:25:43.344189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.504 [2024-10-01 15:25:43.433835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.439 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.439 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:12:45.439 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.697 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:45.956 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0e45c85b-0333-459c-a25c-f1bdeac5ed02 00:12:45.956 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:12:45.956 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0E45C85B0333459CA25CF1BDEAC5ED02 -i 00:12:46.214 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d2d19a30-824a-46df-9248-a06abefffbfc 00:12:46.214 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:12:46.214 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D2D19A30824A46DF9248A06ABEFFFBFC -i 00:12:46.472 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:47.037 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:47.294 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:47.294 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:47.553 nvme0n1 00:12:47.553 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:47.553 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:48.118 nvme1n2 00:12:48.118 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:48.118 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:48.118 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:48.119 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:48.119 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:48.376 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:48.376 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:48.376 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:48.376 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:48.633 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0e45c85b-0333-459c-a25c-f1bdeac5ed02 == \0\e\4\5\c\8\5\b\-\0\3\3\3\-\4\5\9\c\-\a\2\5\c\-\f\1\b\d\e\a\c\5\e\d\0\2 ]] 00:12:48.633 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:48.633 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:48.633 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d2d19a30-824a-46df-9248-a06abefffbfc == \d\2\d\1\9\a\3\0\-\8\2\4\a\-\4\6\d\f\-\9\2\4\8\-\a\0\6\a\b\e\f\f\f\b\f\c ]] 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 76016 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 76016 ']' 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 76016 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76016 00:12:49.199 killing process with pid 76016 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76016' 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 76016 00:12:49.199 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # 
wait 76016 00:12:49.457 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.715 rmmod nvme_tcp 00:12:49.715 rmmod nvme_fabrics 00:12:49.715 rmmod nvme_keyring 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 75648 ']' 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 75648 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 75648 ']' 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 75648 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75648 00:12:49.715 killing process with pid 75648 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.715 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.716 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75648' 00:12:49.716 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 75648 00:12:49.716 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 75648 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:49.974 15:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:49.974 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:50.231 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:50.231 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:50.231 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:50.231 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:50.231 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.231 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.231 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:50.231 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:12:50.232 00:12:50.232 real 0m18.886s 00:12:50.232 user 0m31.286s 00:12:50.232 sys 0m2.730s 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:50.232 ************************************ 00:12:50.232 END TEST nvmf_ns_masking 00:12:50.232 ************************************ 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 
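Teardown in the ns_masking records above mirrors setup in reverse: the iptables rules are dropped by filtering on the SPDK_NVMF comment tag they were inserted with, bridge ports are released, the veth links and bridge are deleted, and the target namespace is removed before the END TEST banner. A sketch of the key steps; the iptr pipeline is reconstructed from the logged fragments, and the final netns delete is an assumption since the body of _remove_spdk_ns is not shown:

    # iptr: strip only the rules tagged SPDK_NVMF at insert time.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed to be done by _remove_spdk_ns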
00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.232 ************************************ 00:12:50.232 START TEST nvmf_auth_target 00:12:50.232 ************************************ 00:12:50.232 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:50.491 * Looking for test storage... 00:12:50.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:50.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.491 --rc genhtml_branch_coverage=1 00:12:50.491 --rc genhtml_function_coverage=1 00:12:50.491 --rc genhtml_legend=1 00:12:50.491 --rc geninfo_all_blocks=1 00:12:50.491 --rc geninfo_unexecuted_blocks=1 00:12:50.491 00:12:50.491 ' 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:50.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.491 --rc genhtml_branch_coverage=1 00:12:50.491 --rc genhtml_function_coverage=1 00:12:50.491 --rc genhtml_legend=1 00:12:50.491 --rc geninfo_all_blocks=1 00:12:50.491 --rc geninfo_unexecuted_blocks=1 00:12:50.491 00:12:50.491 ' 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:50.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.491 --rc genhtml_branch_coverage=1 00:12:50.491 --rc genhtml_function_coverage=1 00:12:50.491 --rc genhtml_legend=1 00:12:50.491 --rc geninfo_all_blocks=1 00:12:50.491 --rc geninfo_unexecuted_blocks=1 00:12:50.491 00:12:50.491 ' 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:50.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.491 --rc genhtml_branch_coverage=1 00:12:50.491 --rc genhtml_function_coverage=1 00:12:50.491 --rc genhtml_legend=1 00:12:50.491 --rc geninfo_all_blocks=1 00:12:50.491 --rc geninfo_unexecuted_blocks=1 00:12:50.491 00:12:50.491 ' 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.491 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.492 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:50.492 
15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:50.492 Cannot find device "nvmf_init_br" 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:50.492 Cannot find device "nvmf_init_br2" 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:50.492 Cannot find device "nvmf_tgt_br" 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:50.492 Cannot find device "nvmf_tgt_br2" 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:50.492 Cannot find device "nvmf_init_br" 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:50.492 Cannot find device "nvmf_init_br2" 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:50.492 Cannot find device "nvmf_tgt_br" 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:50.492 Cannot find device "nvmf_tgt_br2" 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:50.492 Cannot find device "nvmf_br" 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:50.492 Cannot find device "nvmf_init_if" 00:12:50.492 15:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:50.492 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:50.750 Cannot find device "nvmf_init_if2" 00:12:50.750 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:50.750 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.750 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:50.750 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.750 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:50.750 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:50.750 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:50.750 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:50.750 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:50.751 15:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:50.751 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:51.008 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:51.008 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:51.008 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:51.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:51.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:12:51.008 00:12:51.008 --- 10.0.0.3 ping statistics --- 00:12:51.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.008 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:51.008 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:51.008 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:51.008 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:12:51.008 00:12:51.008 --- 10.0.0.4 ping statistics --- 00:12:51.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.008 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:51.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:51.009 00:12:51.009 --- 10.0.0.1 ping statistics --- 00:12:51.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.009 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:51.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:12:51.009 00:12:51.009 --- 10.0.0.2 ping statistics --- 00:12:51.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.009 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=76439 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 76439 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 76439 ']' 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
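[editor's example] The nvmf_veth_init block traced above builds the test topology before nvmf_tgt is started inside the namespace: two initiator veths stay in the root namespace with 10.0.0.1/10.0.0.2, two target veths move into nvmf_tgt_ns_spdk with 10.0.0.3/10.0.0.4, and all four peer ends join one bridge. A minimal sketch assembled from the commands visible in the trace (only the first iptables rule is shown; the others follow the same pattern).

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: *_if is the usable end, *_br is the peer that joins the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$p" up && ip link set "$p" master nvmf_br
  done
  # Open the NVMe/TCP port on the initiator side, tagged for later cleanup:
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3   # initiator -> target reachability check, as in the trace

The four pings in the trace (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside the namespace) verify both directions across the bridge before the target is launched with ip netns exec nvmf_tgt_ns_spdk.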
00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.009 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76469 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=22800d072c9e44322305f7edf864862cb4dd6b52b6343537 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.RZV 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 22800d072c9e44322305f7edf864862cb4dd6b52b6343537 0 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 22800d072c9e44322305f7edf864862cb4dd6b52b6343537 0 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=22800d072c9e44322305f7edf864862cb4dd6b52b6343537 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:51.284 15:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.RZV 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.RZV 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.RZV 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=8bafdf7ed0843bcfc36439a892d3f70aab1cb106ef5954c3437bbbc52c766660 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.5Af 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 8bafdf7ed0843bcfc36439a892d3f70aab1cb106ef5954c3437bbbc52c766660 3 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 8bafdf7ed0843bcfc36439a892d3f70aab1cb106ef5954c3437bbbc52c766660 3 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:51.284 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:51.285 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=8bafdf7ed0843bcfc36439a892d3f70aab1cb106ef5954c3437bbbc52c766660 00:12:51.285 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:12:51.285 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.5Af 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.5Af 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.5Af 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:12:51.572 15:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=0e949291b44f2f09cc41aed94a987fc5 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.n1j 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 0e949291b44f2f09cc41aed94a987fc5 1 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 0e949291b44f2f09cc41aed94a987fc5 1 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=0e949291b44f2f09cc41aed94a987fc5 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.n1j 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.n1j 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.n1j 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7e7509633f5754561756eb6f9bac008f747bfcbc4c5b72ac 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.qs7 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7e7509633f5754561756eb6f9bac008f747bfcbc4c5b72ac 2 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7e7509633f5754561756eb6f9bac008f747bfcbc4c5b72ac 2 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7e7509633f5754561756eb6f9bac008f747bfcbc4c5b72ac 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.qs7 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.qs7 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.qs7 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=d265a347f5b9b7d6535544f517cdebdbaeb39e25edae2d8e 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.mWY 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key d265a347f5b9b7d6535544f517cdebdbaeb39e25edae2d8e 2 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 d265a347f5b9b7d6535544f517cdebdbaeb39e25edae2d8e 2 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=d265a347f5b9b7d6535544f517cdebdbaeb39e25edae2d8e 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.mWY 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.mWY 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.mWY 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:51.572 15:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:51.572 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=cceafa4ae5668e5241afbf0e558ed256 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.1Tq 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key cceafa4ae5668e5241afbf0e558ed256 1 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 cceafa4ae5668e5241afbf0e558ed256 1 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=cceafa4ae5668e5241afbf0e558ed256 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:12:51.573 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.1Tq 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.1Tq 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.1Tq 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=5ce23474aa9fb10621bc008f48ca893c4a6cd4786d0484c89eb430707eabe6af 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.3Dx 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
5ce23474aa9fb10621bc008f48ca893c4a6cd4786d0484c89eb430707eabe6af 3 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 5ce23474aa9fb10621bc008f48ca893c4a6cd4786d0484c89eb430707eabe6af 3 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=5ce23474aa9fb10621bc008f48ca893c4a6cd4786d0484c89eb430707eabe6af 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.3Dx 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.3Dx 00:12:51.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.3Dx 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76439 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 76439 ']' 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.830 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:52.088 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.088 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:52.088 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76469 /var/tmp/host.sock 00:12:52.088 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 76469 ']' 00:12:52.088 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:12:52.088 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:52.088 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
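[editor's example] Each gen_dhchap_key call above draws len/2 random bytes with xxd, then a python step (whose body is not visible in the trace) wraps them into a DHHC-1 secret such as the /tmp/spdk.key-*.XXX files built here. A minimal sketch under the assumption that the wrapping is the standard NVMe DH-HMAC-CHAP representation, base64(key || crc32(key)) with a two-digit hash identifier (00 null, 01 sha256, 02 sha384, 03 sha512); the harness's actual format_dhchap_key may differ in detail.

  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars => 24-byte key material
  digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
  secret=$(python3 - "$key" "$digest" <<'EOF'
  import base64, sys, zlib
  key = bytes.fromhex(sys.argv[1])
  crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 of the key, little-endian
  print(f"DHHC-1:{int(sys.argv[2]):02d}:" + base64.b64encode(key + crc).decode() + ":")
  EOF
  )
  file=$(mktemp -t spdk.key-null.XXX)
  printf '%s\n' "$secret" > "$file"
  chmod 0600 "$file"                     # secrets must not be group/world readable

The DHHC-1:00: and DHHC-1:03: prefixes visible later in the trace (in the nvme_connect secrets) match this layout: the two-digit field records which hash the key was generated for, and the trailing colon closes the representation.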
00:12:52.088 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:52.088 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.345 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.345 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:52.345 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:52.345 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.345 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.603 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.603 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:52.603 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RZV 00:12:52.603 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.603 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.603 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.603 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.RZV 00:12:52.603 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RZV 00:12:52.860 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.5Af ]] 00:12:52.860 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Af 00:12:52.860 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.860 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.860 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.860 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Af 00:12:52.860 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Af 00:12:53.118 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:53.118 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.n1j 00:12:53.118 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.118 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.118 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.118 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
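[editor's example] The trace above has just loaded key0 into both keyrings; the lines that follow repeat the same dance for key1..key3 and the controller keys (ckey0..ckey2) before the digest/dhgroup loops start. A consolidated sketch of the per-key DHCHAP wiring, using only the rpc.py calls visible in this trace; $hostnqn abbreviates the long uuid NQN used throughout.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf
  # Load the secret on both sides: default socket = target, host.sock = initiator
  $rpc keyring_file_add_key key0 /tmp/spdk.key-null.RZV
  $rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RZV
  # Pin the host to one digest/dhgroup pair for this iteration of the loop:
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null
  # Register the host on the subsystem with its key, then authenticate:
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Success criterion used by the test: the qpair's auth state
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'   # expect "completed"

This mirrors the qpair JSON printed further down, where auth.digest, auth.dhgroup, and auth.state are each checked with jq before the controller is detached and the next combination is tried.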
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.n1j 00:12:53.118 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.n1j 00:12:53.376 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.qs7 ]] 00:12:53.376 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qs7 00:12:53.376 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.376 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.376 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.376 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qs7 00:12:53.376 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qs7 00:12:53.943 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:53.943 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mWY 00:12:53.943 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.943 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.943 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.943 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.mWY 00:12:53.943 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.mWY 00:12:54.202 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.1Tq ]] 00:12:54.202 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Tq 00:12:54.202 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.202 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.202 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.202 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Tq 00:12:54.202 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Tq 00:12:54.460 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:54.460 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.3Dx 00:12:54.460 15:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.460 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.460 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.460 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.3Dx 00:12:54.460 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.3Dx 00:12:54.717 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:54.717 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:54.717 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:54.717 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.717 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:54.718 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.976 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.542 00:12:55.542 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.542 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.542 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.800 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.800 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.800 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.800 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.800 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.800 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.800 { 00:12:55.800 "auth": { 00:12:55.800 "dhgroup": "null", 00:12:55.800 "digest": "sha256", 00:12:55.800 "state": "completed" 00:12:55.800 }, 00:12:55.800 "cntlid": 1, 00:12:55.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:12:55.800 "listen_address": { 00:12:55.800 "adrfam": "IPv4", 00:12:55.800 "traddr": "10.0.0.3", 00:12:55.800 "trsvcid": "4420", 00:12:55.800 "trtype": "TCP" 00:12:55.800 }, 00:12:55.800 "peer_address": { 00:12:55.800 "adrfam": "IPv4", 00:12:55.800 "traddr": "10.0.0.1", 00:12:55.800 "trsvcid": "49948", 00:12:55.800 "trtype": "TCP" 00:12:55.800 }, 00:12:55.800 "qid": 0, 00:12:55.800 "state": "enabled", 00:12:55.800 "thread": "nvmf_tgt_poll_group_000" 00:12:55.800 } 00:12:55.800 ]' 00:12:55.800 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.800 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.800 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.061 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:56.061 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.061 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.061 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.061 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.319 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:12:56.319 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:01.581 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.582 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.582 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.582 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.582 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.582 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.582 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.582 15:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.839 00:13:01.839 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.839 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.839 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.098 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.098 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.098 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.098 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.098 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.098 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.098 { 00:13:02.098 "auth": { 00:13:02.098 "dhgroup": "null", 00:13:02.098 "digest": "sha256", 00:13:02.098 "state": "completed" 00:13:02.098 }, 00:13:02.098 "cntlid": 3, 00:13:02.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:02.098 "listen_address": { 00:13:02.098 "adrfam": "IPv4", 00:13:02.098 "traddr": "10.0.0.3", 00:13:02.098 "trsvcid": "4420", 00:13:02.098 "trtype": "TCP" 00:13:02.098 }, 00:13:02.098 "peer_address": { 00:13:02.098 "adrfam": "IPv4", 00:13:02.098 "traddr": "10.0.0.1", 00:13:02.098 "trsvcid": "49972", 00:13:02.098 "trtype": "TCP" 00:13:02.098 }, 00:13:02.098 "qid": 0, 00:13:02.098 "state": "enabled", 00:13:02.098 "thread": "nvmf_tgt_poll_group_000" 00:13:02.098 } 00:13:02.098 ]' 00:13:02.098 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.098 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:02.098 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.356 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:02.356 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.356 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.356 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.356 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.613 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret 
DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:02.613 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:03.548 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.548 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:03.548 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.548 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.548 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.548 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.548 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:03.548 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.807 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.065 00:13:04.065 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.065 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.065 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.633 { 00:13:04.633 "auth": { 00:13:04.633 "dhgroup": "null", 00:13:04.633 "digest": "sha256", 00:13:04.633 "state": "completed" 00:13:04.633 }, 00:13:04.633 "cntlid": 5, 00:13:04.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:04.633 "listen_address": { 00:13:04.633 "adrfam": "IPv4", 00:13:04.633 "traddr": "10.0.0.3", 00:13:04.633 "trsvcid": "4420", 00:13:04.633 "trtype": "TCP" 00:13:04.633 }, 00:13:04.633 "peer_address": { 00:13:04.633 "adrfam": "IPv4", 00:13:04.633 "traddr": "10.0.0.1", 00:13:04.633 "trsvcid": "38898", 00:13:04.633 "trtype": "TCP" 00:13:04.633 }, 00:13:04.633 "qid": 0, 00:13:04.633 "state": "enabled", 00:13:04.633 "thread": "nvmf_tgt_poll_group_000" 00:13:04.633 } 00:13:04.633 ]' 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.633 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.892 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:04.892 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:05.842 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.842 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:05.842 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.842 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.842 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.842 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.842 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:05.842 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:06.111 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:06.111 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.111 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:06.111 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:06.111 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:06.111 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.112 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:13:06.112 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.112 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.112 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.112 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:06.112 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:06.112 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:06.370 00:13:06.370 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.370 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.370 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.937 { 00:13:06.937 "auth": { 00:13:06.937 "dhgroup": "null", 00:13:06.937 "digest": "sha256", 00:13:06.937 "state": "completed" 00:13:06.937 }, 00:13:06.937 "cntlid": 7, 00:13:06.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:06.937 "listen_address": { 00:13:06.937 "adrfam": "IPv4", 00:13:06.937 "traddr": "10.0.0.3", 00:13:06.937 "trsvcid": "4420", 00:13:06.937 "trtype": "TCP" 00:13:06.937 }, 00:13:06.937 "peer_address": { 00:13:06.937 "adrfam": "IPv4", 00:13:06.937 "traddr": "10.0.0.1", 00:13:06.937 "trsvcid": "38942", 00:13:06.937 "trtype": "TCP" 00:13:06.937 }, 00:13:06.937 "qid": 0, 00:13:06.937 "state": "enabled", 00:13:06.937 "thread": "nvmf_tgt_poll_group_000" 00:13:06.937 } 00:13:06.937 ]' 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:06.937 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.937 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.937 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.937 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.196 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:07.196 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:08.131 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.131 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:08.131 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.131 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.131 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.131 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:08.131 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.131 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:08.131 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.390 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.649 00:13:08.649 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.649 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.649 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.907 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.907 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.907 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.907 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.907 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.907 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.907 { 00:13:08.907 "auth": { 00:13:08.907 "dhgroup": "ffdhe2048", 00:13:08.907 "digest": "sha256", 00:13:08.907 "state": "completed" 00:13:08.907 }, 00:13:08.907 "cntlid": 9, 00:13:08.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:08.907 "listen_address": { 00:13:08.907 "adrfam": "IPv4", 00:13:08.907 "traddr": "10.0.0.3", 00:13:08.907 "trsvcid": "4420", 00:13:08.907 "trtype": "TCP" 00:13:08.907 }, 00:13:08.907 "peer_address": { 00:13:08.907 "adrfam": "IPv4", 00:13:08.907 "traddr": "10.0.0.1", 00:13:08.907 "trsvcid": "38972", 00:13:08.907 "trtype": "TCP" 00:13:08.907 }, 00:13:08.907 "qid": 0, 00:13:08.907 "state": "enabled", 00:13:08.907 "thread": "nvmf_tgt_poll_group_000" 00:13:08.907 } 00:13:08.907 ]' 00:13:08.907 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.165 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.165 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.165 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:09.165 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.165 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.165 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.165 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.424 
15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:09.424 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:10.358 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.358 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:10.358 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.358 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.358 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.358 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.358 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:10.358 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.617 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.184 00:13:11.184 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.184 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.184 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.443 { 00:13:11.443 "auth": { 00:13:11.443 "dhgroup": "ffdhe2048", 00:13:11.443 "digest": "sha256", 00:13:11.443 "state": "completed" 00:13:11.443 }, 00:13:11.443 "cntlid": 11, 00:13:11.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:11.443 "listen_address": { 00:13:11.443 "adrfam": "IPv4", 00:13:11.443 "traddr": "10.0.0.3", 00:13:11.443 "trsvcid": "4420", 00:13:11.443 "trtype": "TCP" 00:13:11.443 }, 00:13:11.443 "peer_address": { 00:13:11.443 "adrfam": "IPv4", 00:13:11.443 "traddr": "10.0.0.1", 00:13:11.443 "trsvcid": "38992", 00:13:11.443 "trtype": "TCP" 00:13:11.443 }, 00:13:11.443 "qid": 0, 00:13:11.443 "state": "enabled", 00:13:11.443 "thread": "nvmf_tgt_poll_group_000" 00:13:11.443 } 00:13:11.443 ]' 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.443 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.443 
15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.010 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:12.010 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:12.578 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.578 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:12.578 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.578 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.578 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.578 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.578 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:12.578 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:12.836 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:12.836 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.836 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:12.836 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:12.836 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:12.836 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.836 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.836 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.836 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.094 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:13.094 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.094 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.094 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.352 00:13:13.352 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.352 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.352 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.611 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.611 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.611 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.611 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.611 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.611 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.611 { 00:13:13.611 "auth": { 00:13:13.611 "dhgroup": "ffdhe2048", 00:13:13.611 "digest": "sha256", 00:13:13.611 "state": "completed" 00:13:13.611 }, 00:13:13.611 "cntlid": 13, 00:13:13.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:13.611 "listen_address": { 00:13:13.611 "adrfam": "IPv4", 00:13:13.611 "traddr": "10.0.0.3", 00:13:13.611 "trsvcid": "4420", 00:13:13.611 "trtype": "TCP" 00:13:13.611 }, 00:13:13.611 "peer_address": { 00:13:13.611 "adrfam": "IPv4", 00:13:13.611 "traddr": "10.0.0.1", 00:13:13.611 "trsvcid": "37006", 00:13:13.611 "trtype": "TCP" 00:13:13.611 }, 00:13:13.611 "qid": 0, 00:13:13.611 "state": "enabled", 00:13:13.611 "thread": "nvmf_tgt_poll_group_000" 00:13:13.611 } 00:13:13.611 ]' 00:13:13.611 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.611 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:13.611 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.869 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:13.869 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.869 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.869 15:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.869 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.202 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:14.202 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:14.770 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.770 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:14.770 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.770 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.771 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.771 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.771 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:14.771 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.337 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.595 00:13:15.595 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.595 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.595 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.160 { 00:13:16.160 "auth": { 00:13:16.160 "dhgroup": "ffdhe2048", 00:13:16.160 "digest": "sha256", 00:13:16.160 "state": "completed" 00:13:16.160 }, 00:13:16.160 "cntlid": 15, 00:13:16.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:16.160 "listen_address": { 00:13:16.160 "adrfam": "IPv4", 00:13:16.160 "traddr": "10.0.0.3", 00:13:16.160 "trsvcid": "4420", 00:13:16.160 "trtype": "TCP" 00:13:16.160 }, 00:13:16.160 "peer_address": { 00:13:16.160 "adrfam": "IPv4", 00:13:16.160 "traddr": "10.0.0.1", 00:13:16.160 "trsvcid": "37040", 00:13:16.160 "trtype": "TCP" 00:13:16.160 }, 00:13:16.160 "qid": 0, 00:13:16.160 "state": "enabled", 00:13:16.160 "thread": "nvmf_tgt_poll_group_000" 00:13:16.160 } 00:13:16.160 ]' 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.160 
15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.160 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.726 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:16.726 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:17.315 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.315 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:17.315 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.315 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.315 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.315 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:17.315 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.315 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:17.315 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.592 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.161 00:13:18.161 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.161 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.161 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.419 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.419 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.419 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.419 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.419 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.419 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.419 { 00:13:18.419 "auth": { 00:13:18.419 "dhgroup": "ffdhe3072", 00:13:18.419 "digest": "sha256", 00:13:18.419 "state": "completed" 00:13:18.419 }, 00:13:18.419 "cntlid": 17, 00:13:18.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:18.419 "listen_address": { 00:13:18.419 "adrfam": "IPv4", 00:13:18.419 "traddr": "10.0.0.3", 00:13:18.419 "trsvcid": "4420", 00:13:18.419 "trtype": "TCP" 00:13:18.419 }, 00:13:18.419 "peer_address": { 00:13:18.420 "adrfam": "IPv4", 00:13:18.420 "traddr": "10.0.0.1", 00:13:18.420 "trsvcid": "37050", 00:13:18.420 "trtype": "TCP" 00:13:18.420 }, 00:13:18.420 "qid": 0, 00:13:18.420 "state": "enabled", 00:13:18.420 "thread": "nvmf_tgt_poll_group_000" 00:13:18.420 } 00:13:18.420 ]' 00:13:18.420 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.420 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:18.420 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.420 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:18.420 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.677 15:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.677 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.677 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.935 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:18.935 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:19.869 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.869 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:19.869 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.869 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.869 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.869 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.869 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:19.869 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.869 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.431 00:13:20.432 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.432 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.432 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.715 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.715 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.715 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.715 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.715 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.715 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.715 { 00:13:20.715 "auth": { 00:13:20.715 "dhgroup": "ffdhe3072", 00:13:20.715 "digest": "sha256", 00:13:20.716 "state": "completed" 00:13:20.716 }, 00:13:20.716 "cntlid": 19, 00:13:20.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:20.716 "listen_address": { 00:13:20.716 "adrfam": "IPv4", 00:13:20.716 "traddr": "10.0.0.3", 00:13:20.716 "trsvcid": "4420", 00:13:20.716 "trtype": "TCP" 00:13:20.716 }, 00:13:20.716 "peer_address": { 00:13:20.716 "adrfam": "IPv4", 00:13:20.716 "traddr": "10.0.0.1", 00:13:20.716 "trsvcid": "37076", 00:13:20.716 "trtype": "TCP" 00:13:20.716 }, 00:13:20.716 "qid": 0, 00:13:20.716 "state": "enabled", 00:13:20.716 "thread": "nvmf_tgt_poll_group_000" 00:13:20.716 } 00:13:20.716 ]' 00:13:20.716 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.972 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.972 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.972 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:20.972 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.972 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.972 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.972 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.231 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:21.231 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:22.164 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.164 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:22.164 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.164 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.164 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.164 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.164 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:22.164 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.422 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.986 00:13:22.986 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.986 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.986 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.244 { 00:13:23.244 "auth": { 00:13:23.244 "dhgroup": "ffdhe3072", 00:13:23.244 "digest": "sha256", 00:13:23.244 "state": "completed" 00:13:23.244 }, 00:13:23.244 "cntlid": 21, 00:13:23.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:23.244 "listen_address": { 00:13:23.244 "adrfam": "IPv4", 00:13:23.244 "traddr": "10.0.0.3", 00:13:23.244 "trsvcid": "4420", 00:13:23.244 "trtype": "TCP" 00:13:23.244 }, 00:13:23.244 "peer_address": { 00:13:23.244 "adrfam": "IPv4", 00:13:23.244 "traddr": "10.0.0.1", 00:13:23.244 "trsvcid": "50208", 00:13:23.244 "trtype": "TCP" 00:13:23.244 }, 00:13:23.244 "qid": 0, 00:13:23.244 "state": "enabled", 00:13:23.244 "thread": "nvmf_tgt_poll_group_000" 00:13:23.244 } 00:13:23.244 ]' 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.244 15:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.244 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.502 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:23.502 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:24.436 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.436 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:24.436 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.436 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.436 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.436 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.436 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:24.436 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.695 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.953 00:13:24.953 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.953 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.953 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.211 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.211 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.211 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.211 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.211 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.211 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.211 { 00:13:25.211 "auth": { 00:13:25.211 "dhgroup": "ffdhe3072", 00:13:25.211 "digest": "sha256", 00:13:25.211 "state": "completed" 00:13:25.211 }, 00:13:25.211 "cntlid": 23, 00:13:25.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:25.211 "listen_address": { 00:13:25.211 "adrfam": "IPv4", 00:13:25.211 "traddr": "10.0.0.3", 00:13:25.211 "trsvcid": "4420", 00:13:25.211 "trtype": "TCP" 00:13:25.211 }, 00:13:25.211 "peer_address": { 00:13:25.211 "adrfam": "IPv4", 00:13:25.211 "traddr": "10.0.0.1", 00:13:25.211 "trsvcid": "50250", 00:13:25.211 "trtype": "TCP" 00:13:25.211 }, 00:13:25.211 "qid": 0, 00:13:25.211 "state": "enabled", 00:13:25.211 "thread": "nvmf_tgt_poll_group_000" 00:13:25.211 } 00:13:25.211 ]' 00:13:25.211 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.469 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:13:25.469 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.469 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:25.469 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.469 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.469 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.469 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.725 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:25.725 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:26.658 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.658 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:26.658 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.658 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.658 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.658 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:26.658 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.658 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:26.658 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.916 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.481 00:13:27.481 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.481 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.481 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.739 { 00:13:27.739 "auth": { 00:13:27.739 "dhgroup": "ffdhe4096", 00:13:27.739 "digest": "sha256", 00:13:27.739 "state": "completed" 00:13:27.739 }, 00:13:27.739 "cntlid": 25, 00:13:27.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:27.739 "listen_address": { 00:13:27.739 "adrfam": "IPv4", 00:13:27.739 "traddr": "10.0.0.3", 00:13:27.739 "trsvcid": "4420", 00:13:27.739 "trtype": "TCP" 00:13:27.739 }, 00:13:27.739 "peer_address": { 00:13:27.739 "adrfam": "IPv4", 00:13:27.739 "traddr": "10.0.0.1", 00:13:27.739 "trsvcid": "50286", 00:13:27.739 "trtype": "TCP" 00:13:27.739 }, 00:13:27.739 "qid": 0, 00:13:27.739 "state": "enabled", 00:13:27.739 "thread": "nvmf_tgt_poll_group_000" 00:13:27.739 } 00:13:27.739 ]' 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.739 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.304 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:28.304 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:28.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:28.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:28.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:29.440 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:29.440 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.440 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:29.440 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:29.440 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:29.441 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.441 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.441 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.441 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.441 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.441 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.441 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.441 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.698 00:13:29.698 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.698 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.698 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.956 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.956 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.956 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.956 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.956 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.956 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.956 { 00:13:29.956 "auth": { 00:13:29.956 "dhgroup": "ffdhe4096", 00:13:29.956 "digest": "sha256", 00:13:29.956 "state": "completed" 00:13:29.956 }, 00:13:29.956 "cntlid": 27, 00:13:29.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:29.956 "listen_address": { 00:13:29.956 "adrfam": "IPv4", 00:13:29.956 "traddr": "10.0.0.3", 00:13:29.956 "trsvcid": "4420", 00:13:29.956 "trtype": "TCP" 00:13:29.956 }, 00:13:29.956 "peer_address": { 00:13:29.956 "adrfam": "IPv4", 00:13:29.956 "traddr": "10.0.0.1", 00:13:29.956 "trsvcid": "50322", 00:13:29.956 "trtype": "TCP" 00:13:29.956 }, 00:13:29.956 "qid": 0, 
00:13:29.956 "state": "enabled", 00:13:29.956 "thread": "nvmf_tgt_poll_group_000" 00:13:29.956 } 00:13:29.956 ]' 00:13:29.956 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.213 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.213 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.213 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:30.213 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.213 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.213 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.213 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.471 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:30.471 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:31.405 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.405 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:31.405 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.405 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.405 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.405 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.405 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:31.405 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.663 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.228 00:13:32.228 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.228 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.228 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.486 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.486 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.486 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.486 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.486 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.486 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.486 { 00:13:32.486 "auth": { 00:13:32.486 "dhgroup": "ffdhe4096", 00:13:32.486 "digest": "sha256", 00:13:32.486 "state": "completed" 00:13:32.486 }, 00:13:32.486 "cntlid": 29, 00:13:32.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:32.486 "listen_address": { 00:13:32.487 "adrfam": "IPv4", 00:13:32.487 "traddr": "10.0.0.3", 00:13:32.487 "trsvcid": "4420", 00:13:32.487 "trtype": "TCP" 00:13:32.487 }, 00:13:32.487 "peer_address": { 00:13:32.487 "adrfam": "IPv4", 00:13:32.487 "traddr": "10.0.0.1", 
00:13:32.487 "trsvcid": "50358", 00:13:32.487 "trtype": "TCP" 00:13:32.487 }, 00:13:32.487 "qid": 0, 00:13:32.487 "state": "enabled", 00:13:32.487 "thread": "nvmf_tgt_poll_group_000" 00:13:32.487 } 00:13:32.487 ]' 00:13:32.487 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.487 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.487 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.487 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:32.487 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.487 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.487 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.487 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.053 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:33.053 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:33.619 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.619 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:33.619 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.619 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.619 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.619 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.619 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.619 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.186 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.475 00:13:34.475 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.475 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.475 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.733 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.733 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.733 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.733 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.733 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.733 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.733 { 00:13:34.733 "auth": { 00:13:34.733 "dhgroup": "ffdhe4096", 00:13:34.733 "digest": "sha256", 00:13:34.733 "state": "completed" 00:13:34.733 }, 00:13:34.733 "cntlid": 31, 00:13:34.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:34.733 "listen_address": { 00:13:34.733 "adrfam": "IPv4", 00:13:34.733 "traddr": "10.0.0.3", 00:13:34.733 "trsvcid": "4420", 00:13:34.733 "trtype": "TCP" 00:13:34.733 }, 00:13:34.733 "peer_address": { 00:13:34.733 "adrfam": "IPv4", 00:13:34.733 "traddr": 
"10.0.0.1", 00:13:34.733 "trsvcid": "57772", 00:13:34.733 "trtype": "TCP" 00:13:34.733 }, 00:13:34.733 "qid": 0, 00:13:34.733 "state": "enabled", 00:13:34.733 "thread": "nvmf_tgt_poll_group_000" 00:13:34.733 } 00:13:34.733 ]' 00:13:34.733 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.733 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.733 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.992 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:34.992 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.992 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.992 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.992 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.250 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:35.250 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:36.183 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.184 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:36.184 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.184 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.184 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.184 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.184 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.184 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:36.184 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.442 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.009 00:13:37.009 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.009 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.009 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.267 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.267 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.267 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.267 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.267 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.267 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.267 { 00:13:37.267 "auth": { 00:13:37.267 "dhgroup": "ffdhe6144", 00:13:37.267 "digest": "sha256", 00:13:37.267 "state": "completed" 00:13:37.267 }, 00:13:37.267 "cntlid": 33, 00:13:37.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:37.267 "listen_address": { 00:13:37.267 "adrfam": "IPv4", 00:13:37.267 "traddr": "10.0.0.3", 00:13:37.267 "trsvcid": "4420", 00:13:37.267 
"trtype": "TCP" 00:13:37.267 }, 00:13:37.267 "peer_address": { 00:13:37.267 "adrfam": "IPv4", 00:13:37.267 "traddr": "10.0.0.1", 00:13:37.267 "trsvcid": "57788", 00:13:37.267 "trtype": "TCP" 00:13:37.267 }, 00:13:37.267 "qid": 0, 00:13:37.267 "state": "enabled", 00:13:37.267 "thread": "nvmf_tgt_poll_group_000" 00:13:37.267 } 00:13:37.267 ]' 00:13:37.267 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.267 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.267 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.526 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:37.526 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.526 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.526 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.526 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.788 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:37.788 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:38.740 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.998 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.998 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.998 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.998 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.998 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.998 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.999 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.564 00:13:39.564 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.564 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.564 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.822 { 00:13:39.822 "auth": { 00:13:39.822 "dhgroup": "ffdhe6144", 00:13:39.822 "digest": "sha256", 00:13:39.822 "state": "completed" 00:13:39.822 }, 00:13:39.822 "cntlid": 35, 00:13:39.822 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:39.822 "listen_address": { 00:13:39.822 "adrfam": "IPv4", 00:13:39.822 "traddr": "10.0.0.3", 00:13:39.822 "trsvcid": "4420", 00:13:39.822 "trtype": "TCP" 00:13:39.822 }, 00:13:39.822 "peer_address": { 00:13:39.822 "adrfam": "IPv4", 00:13:39.822 "traddr": "10.0.0.1", 00:13:39.822 "trsvcid": "57820", 00:13:39.822 "trtype": "TCP" 00:13:39.822 }, 00:13:39.822 "qid": 0, 00:13:39.822 "state": "enabled", 00:13:39.822 "thread": "nvmf_tgt_poll_group_000" 00:13:39.822 } 00:13:39.822 ]' 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.822 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.386 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:40.386 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:40.953 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.953 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:40.953 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.953 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.953 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.953 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.953 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:40.953 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.520 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.087 00:13:42.087 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.087 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.087 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.346 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.346 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.346 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.346 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.346 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.346 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.346 { 00:13:42.346 "auth": { 00:13:42.346 "dhgroup": "ffdhe6144", 
00:13:42.346 "digest": "sha256", 00:13:42.346 "state": "completed" 00:13:42.346 }, 00:13:42.346 "cntlid": 37, 00:13:42.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:42.346 "listen_address": { 00:13:42.346 "adrfam": "IPv4", 00:13:42.346 "traddr": "10.0.0.3", 00:13:42.346 "trsvcid": "4420", 00:13:42.346 "trtype": "TCP" 00:13:42.346 }, 00:13:42.346 "peer_address": { 00:13:42.346 "adrfam": "IPv4", 00:13:42.346 "traddr": "10.0.0.1", 00:13:42.346 "trsvcid": "57848", 00:13:42.346 "trtype": "TCP" 00:13:42.346 }, 00:13:42.346 "qid": 0, 00:13:42.346 "state": "enabled", 00:13:42.346 "thread": "nvmf_tgt_poll_group_000" 00:13:42.346 } 00:13:42.346 ]' 00:13:42.346 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.347 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:42.347 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.605 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:42.605 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.605 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.605 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.605 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.864 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:42.864 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.799 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:44.365 00:13:44.365 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.365 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.365 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.932 { 00:13:44.932 "auth": { 00:13:44.932 "dhgroup": 
"ffdhe6144", 00:13:44.932 "digest": "sha256", 00:13:44.932 "state": "completed" 00:13:44.932 }, 00:13:44.932 "cntlid": 39, 00:13:44.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:44.932 "listen_address": { 00:13:44.932 "adrfam": "IPv4", 00:13:44.932 "traddr": "10.0.0.3", 00:13:44.932 "trsvcid": "4420", 00:13:44.932 "trtype": "TCP" 00:13:44.932 }, 00:13:44.932 "peer_address": { 00:13:44.932 "adrfam": "IPv4", 00:13:44.932 "traddr": "10.0.0.1", 00:13:44.932 "trsvcid": "42410", 00:13:44.932 "trtype": "TCP" 00:13:44.932 }, 00:13:44.932 "qid": 0, 00:13:44.932 "state": "enabled", 00:13:44.932 "thread": "nvmf_tgt_poll_group_000" 00:13:44.932 } 00:13:44.932 ]' 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.932 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.189 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:45.189 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:46.125 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.125 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:46.125 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.125 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.125 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.125 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.125 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.125 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:46.125 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.383 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.950 00:13:46.950 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.950 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.950 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.208 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.208 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.208 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.208 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.208 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.208 15:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.208 { 00:13:47.208 "auth": { 00:13:47.209 "dhgroup": "ffdhe8192", 00:13:47.209 "digest": "sha256", 00:13:47.209 "state": "completed" 00:13:47.209 }, 00:13:47.209 "cntlid": 41, 00:13:47.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:47.209 "listen_address": { 00:13:47.209 "adrfam": "IPv4", 00:13:47.209 "traddr": "10.0.0.3", 00:13:47.209 "trsvcid": "4420", 00:13:47.209 "trtype": "TCP" 00:13:47.209 }, 00:13:47.209 "peer_address": { 00:13:47.209 "adrfam": "IPv4", 00:13:47.209 "traddr": "10.0.0.1", 00:13:47.209 "trsvcid": "42448", 00:13:47.209 "trtype": "TCP" 00:13:47.209 }, 00:13:47.209 "qid": 0, 00:13:47.209 "state": "enabled", 00:13:47.209 "thread": "nvmf_tgt_poll_group_000" 00:13:47.209 } 00:13:47.209 ]' 00:13:47.209 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.467 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:47.467 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.467 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:47.467 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.467 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.467 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.467 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.034 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:48.034 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:48.602 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.602 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:48.602 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.602 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.602 15:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.602 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:48.602 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.860 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.794 00:13:49.794 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.794 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.794 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.051 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.051 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.051 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.051 15:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.051 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.051 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.051 { 00:13:50.051 "auth": { 00:13:50.051 "dhgroup": "ffdhe8192", 00:13:50.051 "digest": "sha256", 00:13:50.051 "state": "completed" 00:13:50.051 }, 00:13:50.051 "cntlid": 43, 00:13:50.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:50.051 "listen_address": { 00:13:50.051 "adrfam": "IPv4", 00:13:50.051 "traddr": "10.0.0.3", 00:13:50.051 "trsvcid": "4420", 00:13:50.051 "trtype": "TCP" 00:13:50.051 }, 00:13:50.051 "peer_address": { 00:13:50.051 "adrfam": "IPv4", 00:13:50.051 "traddr": "10.0.0.1", 00:13:50.051 "trsvcid": "42466", 00:13:50.051 "trtype": "TCP" 00:13:50.051 }, 00:13:50.051 "qid": 0, 00:13:50.051 "state": "enabled", 00:13:50.051 "thread": "nvmf_tgt_poll_group_000" 00:13:50.051 } 00:13:50.051 ]' 00:13:50.051 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.308 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:50.308 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.308 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:50.308 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.308 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.308 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.308 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.874 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:50.874 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:13:51.440 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.440 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:51.440 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.440 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
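Each SPDK-to-SPDK round trip is followed by the same check from the kernel initiator via nvme-cli, passing the DHHC-1 secrets on the command line. A sketch of that leg with the secret blobs elided; the flags are the ones used throughout this trace:

subnqn=nqn.2024-03.io.spdk:cnode0
hostid=425da7d6-2e40-4e0d-b2ef-fba0474bdabf

# Connect through the kernel host stack; --dhchap-secret carries the host key,
# --dhchap-ctrl-secret the controller key for bidirectional auth. Secrets elided.
nvme connect -t tcp -a 10.0.0.3 -l 0 -i 1 \
    -n "$subnqn" -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
    --dhchap-secret "DHHC-1:01:..." --dhchap-ctrl-secret "DHHC-1:02:..."

# A clean disconnect confirms the controller actually came up.
nvme disconnect -n "$subnqn"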
00:13:51.440 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.440 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.440 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:51.440 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.698 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.631 00:13:52.631 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.631 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.631 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.631 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.631 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.631 15:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.631 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.631 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.631 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.631 { 00:13:52.631 "auth": { 00:13:52.631 "dhgroup": "ffdhe8192", 00:13:52.631 "digest": "sha256", 00:13:52.631 "state": "completed" 00:13:52.631 }, 00:13:52.631 "cntlid": 45, 00:13:52.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:52.631 "listen_address": { 00:13:52.631 "adrfam": "IPv4", 00:13:52.631 "traddr": "10.0.0.3", 00:13:52.631 "trsvcid": "4420", 00:13:52.631 "trtype": "TCP" 00:13:52.631 }, 00:13:52.631 "peer_address": { 00:13:52.631 "adrfam": "IPv4", 00:13:52.631 "traddr": "10.0.0.1", 00:13:52.631 "trsvcid": "48462", 00:13:52.631 "trtype": "TCP" 00:13:52.631 }, 00:13:52.631 "qid": 0, 00:13:52.631 "state": "enabled", 00:13:52.631 "thread": "nvmf_tgt_poll_group_000" 00:13:52.631 } 00:13:52.631 ]' 00:13:52.631 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.889 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.889 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.889 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:52.889 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.889 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.889 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.889 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.146 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:53.146 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:13:54.077 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.077 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:54.077 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
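The jq probes above are how every attach is validated: dump the subsystem's qpairs on the target and compare the negotiated auth parameters against what bdev_nvme_set_options forced. In script form (rpc_cmd stands for the target-side rpc.py wrapper used throughout this log; shown for the sha256/ffdhe8192 case just tested):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# The digest and dhgroup must be the ones forced on the host side,
# and the authentication state machine must have finished.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]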
00:13:54.077 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.077 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.077 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.077 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:54.077 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:54.335 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:54.335 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.335 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:54.336 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:54.336 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:54.336 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.336 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:13:54.336 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.336 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.336 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.336 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:54.336 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.336 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.901 00:13:55.159 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.159 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.159 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.417 
15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.417 { 00:13:55.417 "auth": { 00:13:55.417 "dhgroup": "ffdhe8192", 00:13:55.417 "digest": "sha256", 00:13:55.417 "state": "completed" 00:13:55.417 }, 00:13:55.417 "cntlid": 47, 00:13:55.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:55.417 "listen_address": { 00:13:55.417 "adrfam": "IPv4", 00:13:55.417 "traddr": "10.0.0.3", 00:13:55.417 "trsvcid": "4420", 00:13:55.417 "trtype": "TCP" 00:13:55.417 }, 00:13:55.417 "peer_address": { 00:13:55.417 "adrfam": "IPv4", 00:13:55.417 "traddr": "10.0.0.1", 00:13:55.417 "trsvcid": "48490", 00:13:55.417 "trtype": "TCP" 00:13:55.417 }, 00:13:55.417 "qid": 0, 00:13:55.417 "state": "enabled", 00:13:55.417 "thread": "nvmf_tgt_poll_group_000" 00:13:55.417 } 00:13:55.417 ]' 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.417 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.983 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:55.983 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:13:56.550 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.550 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:56.550 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.550 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
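From this point the trace switches to sha384 with the null dhgroup: the whole run is one sweep over every digest, dhgroup, and key combination. A loop skeleton reconstructed from the for-lines visible in the xtrace (array contents beyond what this excerpt shows are assumptions):

for digest in "${digests[@]}"; do        # sha256 above, sha384 next, ...
  for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe4096, ffdhe6144, ffdhe8192, ...
    for keyid in "${!keys[@]}"; do       # key0..key3
      # Re-pin the host to one combination, then run the full cycle for this key.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done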
00:13:56.550 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.550 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:56.550 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:56.550 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.550 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:56.550 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.809 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.068 00:13:57.326 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.326 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.326 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.584 { 00:13:57.584 "auth": { 00:13:57.584 "dhgroup": "null", 00:13:57.584 "digest": "sha384", 00:13:57.584 "state": "completed" 00:13:57.584 }, 00:13:57.584 "cntlid": 49, 00:13:57.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:57.584 "listen_address": { 00:13:57.584 "adrfam": "IPv4", 00:13:57.584 "traddr": "10.0.0.3", 00:13:57.584 "trsvcid": "4420", 00:13:57.584 "trtype": "TCP" 00:13:57.584 }, 00:13:57.584 "peer_address": { 00:13:57.584 "adrfam": "IPv4", 00:13:57.584 "traddr": "10.0.0.1", 00:13:57.584 "trsvcid": "48514", 00:13:57.584 "trtype": "TCP" 00:13:57.584 }, 00:13:57.584 "qid": 0, 00:13:57.584 "state": "enabled", 00:13:57.584 "thread": "nvmf_tgt_poll_group_000" 00:13:57.584 } 00:13:57.584 ]' 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.584 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.842 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:57.842 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:13:58.777 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.777 15:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:13:58.777 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.777 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.777 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.777 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.777 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:58.777 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.036 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.602 00:13:59.602 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.602 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.602 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.861 { 00:13:59.861 "auth": { 00:13:59.861 "dhgroup": "null", 00:13:59.861 "digest": "sha384", 00:13:59.861 "state": "completed" 00:13:59.861 }, 00:13:59.861 "cntlid": 51, 00:13:59.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:13:59.861 "listen_address": { 00:13:59.861 "adrfam": "IPv4", 00:13:59.861 "traddr": "10.0.0.3", 00:13:59.861 "trsvcid": "4420", 00:13:59.861 "trtype": "TCP" 00:13:59.861 }, 00:13:59.861 "peer_address": { 00:13:59.861 "adrfam": "IPv4", 00:13:59.861 "traddr": "10.0.0.1", 00:13:59.861 "trsvcid": "48552", 00:13:59.861 "trtype": "TCP" 00:13:59.861 }, 00:13:59.861 "qid": 0, 00:13:59.861 "state": "enabled", 00:13:59.861 "thread": "nvmf_tgt_poll_group_000" 00:13:59.861 } 00:13:59.861 ]' 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:59.861 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.119 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.119 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.119 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.378 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:00.378 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:01.313 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.313 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.313 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:01.313 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.313 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.313 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.313 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.313 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:01.313 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.572 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.831 00:14:01.831 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.831 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.831 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.091 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.091 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.091 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.091 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.091 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.091 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.091 { 00:14:02.091 "auth": { 00:14:02.091 "dhgroup": "null", 00:14:02.091 "digest": "sha384", 00:14:02.091 "state": "completed" 00:14:02.091 }, 00:14:02.091 "cntlid": 53, 00:14:02.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:02.091 "listen_address": { 00:14:02.091 "adrfam": "IPv4", 00:14:02.091 "traddr": "10.0.0.3", 00:14:02.091 "trsvcid": "4420", 00:14:02.091 "trtype": "TCP" 00:14:02.091 }, 00:14:02.091 "peer_address": { 00:14:02.091 "adrfam": "IPv4", 00:14:02.091 "traddr": "10.0.0.1", 00:14:02.091 "trsvcid": "48582", 00:14:02.091 "trtype": "TCP" 00:14:02.091 }, 00:14:02.091 "qid": 0, 00:14:02.091 "state": "enabled", 00:14:02.091 "thread": "nvmf_tgt_poll_group_000" 00:14:02.091 } 00:14:02.091 ]' 00:14:02.091 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.349 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.349 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.349 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:02.349 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.349 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.349 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.349 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.607 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:02.607 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:03.543 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.543 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:03.543 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.543 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.543 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.543 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.543 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:03.543 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:03.803 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:04.371 00:14:04.371 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.371 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.371 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.629 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.629 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.629 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.629 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.629 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.629 { 00:14:04.629 "auth": { 00:14:04.629 "dhgroup": "null", 00:14:04.629 "digest": "sha384", 00:14:04.629 "state": "completed" 00:14:04.629 }, 00:14:04.629 "cntlid": 55, 00:14:04.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:04.629 "listen_address": { 00:14:04.629 "adrfam": "IPv4", 00:14:04.629 "traddr": "10.0.0.3", 00:14:04.629 "trsvcid": "4420", 00:14:04.629 "trtype": "TCP" 00:14:04.629 }, 00:14:04.629 "peer_address": { 00:14:04.629 "adrfam": "IPv4", 00:14:04.629 "traddr": "10.0.0.1", 00:14:04.629 "trsvcid": "39896", 00:14:04.629 "trtype": "TCP" 00:14:04.629 }, 00:14:04.629 "qid": 0, 00:14:04.629 "state": "enabled", 00:14:04.629 "thread": "nvmf_tgt_poll_group_000" 00:14:04.629 } 00:14:04.629 ]' 00:14:04.629 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.629 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.629 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.887 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:04.887 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.887 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.887 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.887 15:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.146 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:05.146 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:06.169 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:14:06.169 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:06.169 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.169 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.169 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.735 00:14:06.735 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
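
Each connect_authenticate iteration in this trace follows the same shape, varying only the digest, DH group, and key index: the host-side bdev layer is restricted to the digest/dhgroup pair under test, the host NQN is registered on the subsystem with the key for that index (plus a controller key when one exists; key3 in this run has none), a controller is attached and its qpair checked for the negotiated auth parameters, and the same credentials are then exercised through the kernel initiator before the host entry is removed. A minimal sketch of one such cycle, condensed from the commands visible in this log (the NQNs, addresses, key names, and DHHC-1 secrets are the values used by this particular run; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and some per-run flags such as -q/--hostid on nvme connect are omitted for brevity):

  # restrict the host bdev layer to the digest/dhgroup combination under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # register the host NQN on the subsystem with the key pair for this index
  # (issued against the target's RPC socket)
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # attach a controller over TCP, authenticating with the same keys
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # verify the qpair negotiated what was asked for:
  # digest, dhgroup, and state "completed"
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'

  # detach, then prove the same secrets work through the kernel initiator
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      --dhchap-secret DHHC-1:00:MjI4... --dhchap-ctrl-secret DHHC-1:03:OGJh...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf

The "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)" line closing each cycle is the kernel initiator confirming that the raw DHHC-1 secret path authenticated as well; only after that is the host removed and the next key index (or the next dhgroup: null, then ffdhe2048, then ffdhe3072) started.
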
00:14:06.736 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.736 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.994 { 00:14:06.994 "auth": { 00:14:06.994 "dhgroup": "ffdhe2048", 00:14:06.994 "digest": "sha384", 00:14:06.994 "state": "completed" 00:14:06.994 }, 00:14:06.994 "cntlid": 57, 00:14:06.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:06.994 "listen_address": { 00:14:06.994 "adrfam": "IPv4", 00:14:06.994 "traddr": "10.0.0.3", 00:14:06.994 "trsvcid": "4420", 00:14:06.994 "trtype": "TCP" 00:14:06.994 }, 00:14:06.994 "peer_address": { 00:14:06.994 "adrfam": "IPv4", 00:14:06.994 "traddr": "10.0.0.1", 00:14:06.994 "trsvcid": "39912", 00:14:06.994 "trtype": "TCP" 00:14:06.994 }, 00:14:06.994 "qid": 0, 00:14:06.994 "state": "enabled", 00:14:06.994 "thread": "nvmf_tgt_poll_group_000" 00:14:06.994 } 00:14:06.994 ]' 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:06.994 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.251 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.251 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.251 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.509 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:07.509 15:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: 
--dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:08.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:08.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:08.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.703 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.962 00:14:08.962 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.962 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.962 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.535 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.535 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.535 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.535 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.535 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.536 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.536 { 00:14:09.536 "auth": { 00:14:09.536 "dhgroup": "ffdhe2048", 00:14:09.536 "digest": "sha384", 00:14:09.536 "state": "completed" 00:14:09.536 }, 00:14:09.536 "cntlid": 59, 00:14:09.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:09.536 "listen_address": { 00:14:09.536 "adrfam": "IPv4", 00:14:09.536 "traddr": "10.0.0.3", 00:14:09.536 "trsvcid": "4420", 00:14:09.536 "trtype": "TCP" 00:14:09.536 }, 00:14:09.536 "peer_address": { 00:14:09.536 "adrfam": "IPv4", 00:14:09.536 "traddr": "10.0.0.1", 00:14:09.536 "trsvcid": "39940", 00:14:09.536 "trtype": "TCP" 00:14:09.536 }, 00:14:09.536 "qid": 0, 00:14:09.536 "state": "enabled", 00:14:09.536 "thread": "nvmf_tgt_poll_group_000" 00:14:09.536 } 00:14:09.536 ]' 00:14:09.536 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.536 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.536 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.536 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:09.536 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.536 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.536 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.536 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.819 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:09.819 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:10.754 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.754 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:10.754 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.754 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.754 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.754 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.754 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:10.754 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.013 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.272 00:14:11.531 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.531 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.531 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.791 { 00:14:11.791 "auth": { 00:14:11.791 "dhgroup": "ffdhe2048", 00:14:11.791 "digest": "sha384", 00:14:11.791 "state": "completed" 00:14:11.791 }, 00:14:11.791 "cntlid": 61, 00:14:11.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:11.791 "listen_address": { 00:14:11.791 "adrfam": "IPv4", 00:14:11.791 "traddr": "10.0.0.3", 00:14:11.791 "trsvcid": "4420", 00:14:11.791 "trtype": "TCP" 00:14:11.791 }, 00:14:11.791 "peer_address": { 00:14:11.791 "adrfam": "IPv4", 00:14:11.791 "traddr": "10.0.0.1", 00:14:11.791 "trsvcid": "39970", 00:14:11.791 "trtype": "TCP" 00:14:11.791 }, 00:14:11.791 "qid": 0, 00:14:11.791 "state": "enabled", 00:14:11.791 "thread": "nvmf_tgt_poll_group_000" 00:14:11.791 } 00:14:11.791 ]' 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.791 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.050 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.316 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:12.316 15:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:12.883 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.883 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:12.883 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.883 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.883 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.883 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.883 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:12.883 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:13.141 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:13.709 00:14:13.709 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.709 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.709 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.967 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.967 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.967 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.967 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.967 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.967 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.967 { 00:14:13.967 "auth": { 00:14:13.967 "dhgroup": "ffdhe2048", 00:14:13.967 "digest": "sha384", 00:14:13.967 "state": "completed" 00:14:13.967 }, 00:14:13.967 "cntlid": 63, 00:14:13.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:13.967 "listen_address": { 00:14:13.967 "adrfam": "IPv4", 00:14:13.967 "traddr": "10.0.0.3", 00:14:13.967 "trsvcid": "4420", 00:14:13.967 "trtype": "TCP" 00:14:13.967 }, 00:14:13.967 "peer_address": { 00:14:13.967 "adrfam": "IPv4", 00:14:13.967 "traddr": "10.0.0.1", 00:14:13.967 "trsvcid": "59552", 00:14:13.967 "trtype": "TCP" 00:14:13.967 }, 00:14:13.967 "qid": 0, 00:14:13.967 "state": "enabled", 00:14:13.967 "thread": "nvmf_tgt_poll_group_000" 00:14:13.967 } 00:14:13.967 ]' 00:14:13.967 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.967 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:13.967 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.967 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:13.967 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.225 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.225 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.225 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.484 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:14.484 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:15.418 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.418 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:15.418 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.418 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.418 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.418 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:15.418 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.418 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:15.418 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:15.676 15:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.935 00:14:16.193 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.193 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.193 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.450 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.450 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.450 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.450 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.450 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.450 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.450 { 00:14:16.450 "auth": { 00:14:16.450 "dhgroup": "ffdhe3072", 00:14:16.450 "digest": "sha384", 00:14:16.450 "state": "completed" 00:14:16.450 }, 00:14:16.450 "cntlid": 65, 00:14:16.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:16.450 "listen_address": { 00:14:16.450 "adrfam": "IPv4", 00:14:16.450 "traddr": "10.0.0.3", 00:14:16.450 "trsvcid": "4420", 00:14:16.450 "trtype": "TCP" 00:14:16.450 }, 00:14:16.450 "peer_address": { 00:14:16.450 "adrfam": "IPv4", 00:14:16.450 "traddr": "10.0.0.1", 00:14:16.450 "trsvcid": "59568", 00:14:16.450 "trtype": "TCP" 00:14:16.451 }, 00:14:16.451 "qid": 0, 00:14:16.451 "state": "enabled", 00:14:16.451 "thread": "nvmf_tgt_poll_group_000" 00:14:16.451 } 00:14:16.451 ]' 00:14:16.451 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.451 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.451 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.451 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:16.451 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.451 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.451 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.451 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.018 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:17.018 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:17.587 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.587 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:17.587 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.587 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.587 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.587 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.587 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:17.587 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:17.846 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:17.846 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.846 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:17.846 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:17.846 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:17.846 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.846 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.846 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.846 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.846 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.846 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.846 15:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.846 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.413 00:14:18.413 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.413 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.413 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.672 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.672 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.672 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.672 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.672 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.672 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.672 { 00:14:18.672 "auth": { 00:14:18.672 "dhgroup": "ffdhe3072", 00:14:18.672 "digest": "sha384", 00:14:18.672 "state": "completed" 00:14:18.672 }, 00:14:18.672 "cntlid": 67, 00:14:18.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:18.672 "listen_address": { 00:14:18.672 "adrfam": "IPv4", 00:14:18.672 "traddr": "10.0.0.3", 00:14:18.672 "trsvcid": "4420", 00:14:18.672 "trtype": "TCP" 00:14:18.672 }, 00:14:18.672 "peer_address": { 00:14:18.672 "adrfam": "IPv4", 00:14:18.672 "traddr": "10.0.0.1", 00:14:18.672 "trsvcid": "59594", 00:14:18.672 "trtype": "TCP" 00:14:18.672 }, 00:14:18.672 "qid": 0, 00:14:18.672 "state": "enabled", 00:14:18.672 "thread": "nvmf_tgt_poll_group_000" 00:14:18.672 } 00:14:18.672 ]' 00:14:18.672 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.929 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:18.929 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.929 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:18.929 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.930 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.930 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.930 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.188 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:19.188 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:20.133 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.133 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:20.133 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.133 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.133 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.133 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.133 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:20.133 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.391 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.958 00:14:20.958 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.958 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.958 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.221 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.221 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.221 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.221 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.221 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.221 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.221 { 00:14:21.221 "auth": { 00:14:21.221 "dhgroup": "ffdhe3072", 00:14:21.221 "digest": "sha384", 00:14:21.221 "state": "completed" 00:14:21.221 }, 00:14:21.221 "cntlid": 69, 00:14:21.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:21.221 "listen_address": { 00:14:21.221 "adrfam": "IPv4", 00:14:21.221 "traddr": "10.0.0.3", 00:14:21.221 "trsvcid": "4420", 00:14:21.221 "trtype": "TCP" 00:14:21.221 }, 00:14:21.221 "peer_address": { 00:14:21.221 "adrfam": "IPv4", 00:14:21.221 "traddr": "10.0.0.1", 00:14:21.221 "trsvcid": "59636", 00:14:21.221 "trtype": "TCP" 00:14:21.221 }, 00:14:21.221 "qid": 0, 00:14:21.221 "state": "enabled", 00:14:21.221 "thread": "nvmf_tgt_poll_group_000" 00:14:21.221 } 00:14:21.221 ]' 00:14:21.221 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.221 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.221 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.491 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:21.491 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.491 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.491 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:21.491 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.749 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:21.749 15:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:22.681 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.681 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:22.681 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.681 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.681 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.681 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.681 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:22.681 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.937 15:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:23.502 00:14:23.502 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.502 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.502 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.502 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.502 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.502 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.502 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.762 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.762 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.762 { 00:14:23.762 "auth": { 00:14:23.762 "dhgroup": "ffdhe3072", 00:14:23.762 "digest": "sha384", 00:14:23.762 "state": "completed" 00:14:23.762 }, 00:14:23.762 "cntlid": 71, 00:14:23.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:23.762 "listen_address": { 00:14:23.762 "adrfam": "IPv4", 00:14:23.762 "traddr": "10.0.0.3", 00:14:23.762 "trsvcid": "4420", 00:14:23.762 "trtype": "TCP" 00:14:23.762 }, 00:14:23.762 "peer_address": { 00:14:23.762 "adrfam": "IPv4", 00:14:23.762 "traddr": "10.0.0.1", 00:14:23.762 "trsvcid": "57172", 00:14:23.762 "trtype": "TCP" 00:14:23.762 }, 00:14:23.762 "qid": 0, 00:14:23.762 "state": "enabled", 00:14:23.762 "thread": "nvmf_tgt_poll_group_000" 00:14:23.762 } 00:14:23.762 ]' 00:14:23.762 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.762 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.762 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.762 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:23.762 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.762 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.762 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.762 15:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.019 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:24.020 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:24.952 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.952 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:24.952 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.952 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.952 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.952 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:24.952 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.952 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:24.952 15:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.210 15:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.210 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.776 00:14:25.776 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.776 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.776 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.033 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.033 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.033 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.033 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.033 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.033 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.033 { 00:14:26.033 "auth": { 00:14:26.033 "dhgroup": "ffdhe4096", 00:14:26.033 "digest": "sha384", 00:14:26.033 "state": "completed" 00:14:26.033 }, 00:14:26.033 "cntlid": 73, 00:14:26.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:26.033 "listen_address": { 00:14:26.033 "adrfam": "IPv4", 00:14:26.033 "traddr": "10.0.0.3", 00:14:26.033 "trsvcid": "4420", 00:14:26.033 "trtype": "TCP" 00:14:26.033 }, 00:14:26.033 "peer_address": { 00:14:26.033 "adrfam": "IPv4", 00:14:26.033 "traddr": "10.0.0.1", 00:14:26.033 "trsvcid": "57206", 00:14:26.033 "trtype": "TCP" 00:14:26.033 }, 00:14:26.033 "qid": 0, 00:14:26.033 "state": "enabled", 00:14:26.033 "thread": "nvmf_tgt_poll_group_000" 00:14:26.033 } 00:14:26.034 ]' 00:14:26.034 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.034 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.034 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.034 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:26.034 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.034 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.034 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.034 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.599 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:26.599 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:27.543 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.543 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:27.543 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.543 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.543 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.543 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.543 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:27.543 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.800 15:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.800 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.059 00:14:28.316 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.316 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.316 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.573 { 00:14:28.573 "auth": { 00:14:28.573 "dhgroup": "ffdhe4096", 00:14:28.573 "digest": "sha384", 00:14:28.573 "state": "completed" 00:14:28.573 }, 00:14:28.573 "cntlid": 75, 00:14:28.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:28.573 "listen_address": { 00:14:28.573 "adrfam": "IPv4", 00:14:28.573 "traddr": "10.0.0.3", 00:14:28.573 "trsvcid": "4420", 00:14:28.573 "trtype": "TCP" 00:14:28.573 }, 00:14:28.573 "peer_address": { 00:14:28.573 "adrfam": "IPv4", 00:14:28.573 "traddr": "10.0.0.1", 00:14:28.573 "trsvcid": "57230", 00:14:28.573 "trtype": "TCP" 00:14:28.573 }, 00:14:28.573 "qid": 0, 00:14:28.573 "state": "enabled", 00:14:28.573 "thread": "nvmf_tgt_poll_group_000" 00:14:28.573 } 00:14:28.573 ]' 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:14:28.573 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.832 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.832 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.832 15:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.089 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:29.089 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:30.031 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.031 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:30.031 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.031 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.031 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.031 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.031 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:30.031 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.031 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.599 00:14:30.599 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.599 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.599 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.900 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.900 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.900 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.900 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.900 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.900 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.900 { 00:14:30.900 "auth": { 00:14:30.900 "dhgroup": "ffdhe4096", 00:14:30.900 "digest": "sha384", 00:14:30.900 "state": "completed" 00:14:30.900 }, 00:14:30.900 "cntlid": 77, 00:14:30.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:30.900 "listen_address": { 00:14:30.900 "adrfam": "IPv4", 00:14:30.900 "traddr": "10.0.0.3", 00:14:30.900 "trsvcid": "4420", 00:14:30.900 "trtype": "TCP" 00:14:30.900 }, 00:14:30.900 "peer_address": { 00:14:30.900 "adrfam": "IPv4", 00:14:30.900 "traddr": "10.0.0.1", 00:14:30.900 "trsvcid": "57248", 00:14:30.900 "trtype": "TCP" 00:14:30.900 }, 00:14:30.900 "qid": 0, 00:14:30.900 "state": "enabled", 00:14:30.900 "thread": "nvmf_tgt_poll_group_000" 00:14:30.900 } 00:14:30.900 ]' 00:14:30.900 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.158 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:31.158 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:14:31.158 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:31.158 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.158 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.158 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.158 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.416 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:31.416 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:32.354 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.354 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:32.354 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.354 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.354 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.354 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.354 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:32.354 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:32.612 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:32.612 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.612 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:32.612 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:32.613 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:32.613 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.613 15:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:14:32.613 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.613 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.613 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.613 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:32.613 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:32.613 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.179 00:14:33.179 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.179 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.179 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.438 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.438 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.438 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.438 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.438 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.438 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.438 { 00:14:33.438 "auth": { 00:14:33.438 "dhgroup": "ffdhe4096", 00:14:33.438 "digest": "sha384", 00:14:33.438 "state": "completed" 00:14:33.438 }, 00:14:33.438 "cntlid": 79, 00:14:33.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:33.438 "listen_address": { 00:14:33.438 "adrfam": "IPv4", 00:14:33.438 "traddr": "10.0.0.3", 00:14:33.438 "trsvcid": "4420", 00:14:33.438 "trtype": "TCP" 00:14:33.438 }, 00:14:33.438 "peer_address": { 00:14:33.438 "adrfam": "IPv4", 00:14:33.438 "traddr": "10.0.0.1", 00:14:33.438 "trsvcid": "56630", 00:14:33.438 "trtype": "TCP" 00:14:33.438 }, 00:14:33.438 "qid": 0, 00:14:33.438 "state": "enabled", 00:14:33.438 "thread": "nvmf_tgt_poll_group_000" 00:14:33.438 } 00:14:33.438 ]' 00:14:33.438 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.438 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:33.438 15:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.438 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.438 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.697 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.697 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.697 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.955 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:33.955 15:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:34.522 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.522 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:34.522 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.522 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.522 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.522 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.522 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.522 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:34.522 15:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.089 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.348 00:14:35.605 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.606 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.606 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.863 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.864 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.864 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.864 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.864 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.864 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.864 { 00:14:35.864 "auth": { 00:14:35.864 "dhgroup": "ffdhe6144", 00:14:35.864 "digest": "sha384", 00:14:35.864 "state": "completed" 00:14:35.864 }, 00:14:35.864 "cntlid": 81, 00:14:35.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:35.864 "listen_address": { 00:14:35.864 "adrfam": "IPv4", 00:14:35.864 "traddr": "10.0.0.3", 00:14:35.864 "trsvcid": "4420", 00:14:35.864 "trtype": "TCP" 00:14:35.864 }, 00:14:35.864 "peer_address": { 00:14:35.864 "adrfam": "IPv4", 00:14:35.864 "traddr": "10.0.0.1", 00:14:35.864 "trsvcid": "56648", 00:14:35.864 "trtype": "TCP" 00:14:35.864 }, 00:14:35.864 "qid": 0, 00:14:35.864 "state": "enabled", 00:14:35.864 "thread": "nvmf_tgt_poll_group_000" 00:14:35.864 } 00:14:35.864 ]' 00:14:35.864 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:14:35.864 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:35.864 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.864 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:35.864 15:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.122 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.122 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.122 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.380 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:36.380 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:36.950 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.950 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:36.950 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.950 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.209 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.209 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.209 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:37.209 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.467 15:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.034 00:14:38.034 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.034 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.034 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.291 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.291 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.291 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.291 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.291 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.291 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.291 { 00:14:38.291 "auth": { 00:14:38.291 "dhgroup": "ffdhe6144", 00:14:38.291 "digest": "sha384", 00:14:38.291 "state": "completed" 00:14:38.291 }, 00:14:38.291 "cntlid": 83, 00:14:38.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:38.291 "listen_address": { 00:14:38.291 "adrfam": "IPv4", 00:14:38.291 "traddr": "10.0.0.3", 00:14:38.291 "trsvcid": "4420", 00:14:38.291 "trtype": "TCP" 00:14:38.291 }, 00:14:38.291 "peer_address": { 00:14:38.291 "adrfam": "IPv4", 00:14:38.291 "traddr": "10.0.0.1", 00:14:38.291 "trsvcid": "56684", 00:14:38.291 "trtype": "TCP" 00:14:38.291 }, 00:14:38.291 "qid": 0, 00:14:38.291 "state": 
"enabled", 00:14:38.291 "thread": "nvmf_tgt_poll_group_000" 00:14:38.291 } 00:14:38.291 ]' 00:14:38.291 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.549 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:38.549 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.549 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:38.549 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.549 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.549 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.549 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.115 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:39.115 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:39.682 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.682 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:39.682 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.682 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.682 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.682 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.682 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:39.682 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.940 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.505 00:14:40.505 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.505 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.505 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.762 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.762 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.762 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.762 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.762 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.762 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.762 { 00:14:40.762 "auth": { 00:14:40.762 "dhgroup": "ffdhe6144", 00:14:40.762 "digest": "sha384", 00:14:40.762 "state": "completed" 00:14:40.762 }, 00:14:40.762 "cntlid": 85, 00:14:40.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:40.762 "listen_address": { 00:14:40.762 "adrfam": "IPv4", 00:14:40.762 "traddr": "10.0.0.3", 00:14:40.762 "trsvcid": "4420", 00:14:40.762 "trtype": "TCP" 00:14:40.762 }, 00:14:40.762 "peer_address": { 00:14:40.762 "adrfam": "IPv4", 00:14:40.762 "traddr": "10.0.0.1", 00:14:40.762 
"trsvcid": "56704", 00:14:40.762 "trtype": "TCP" 00:14:40.762 }, 00:14:40.762 "qid": 0, 00:14:40.762 "state": "enabled", 00:14:40.762 "thread": "nvmf_tgt_poll_group_000" 00:14:40.762 } 00:14:40.762 ]' 00:14:40.762 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.020 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.020 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.020 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:41.020 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.020 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.020 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.020 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.279 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:41.279 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:42.220 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.220 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:42.220 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.220 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.220 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.220 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.220 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:42.220 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.478 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:42.479 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.479 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.044 00:14:43.044 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.044 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.044 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.302 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.302 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.302 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.302 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.560 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.560 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.560 { 00:14:43.560 "auth": { 00:14:43.560 "dhgroup": "ffdhe6144", 00:14:43.560 "digest": "sha384", 00:14:43.560 "state": "completed" 00:14:43.560 }, 00:14:43.560 "cntlid": 87, 00:14:43.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:43.560 "listen_address": { 00:14:43.560 "adrfam": "IPv4", 00:14:43.560 "traddr": "10.0.0.3", 00:14:43.560 "trsvcid": "4420", 00:14:43.560 "trtype": "TCP" 00:14:43.560 }, 00:14:43.560 "peer_address": { 00:14:43.560 "adrfam": "IPv4", 00:14:43.560 "traddr": "10.0.0.1", 
00:14:43.560 "trsvcid": "58540", 00:14:43.560 "trtype": "TCP" 00:14:43.560 }, 00:14:43.560 "qid": 0, 00:14:43.560 "state": "enabled", 00:14:43.560 "thread": "nvmf_tgt_poll_group_000" 00:14:43.560 } 00:14:43.560 ]' 00:14:43.560 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.560 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:43.560 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.560 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:43.560 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.560 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.560 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.560 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.126 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:44.126 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:44.692 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.692 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:44.692 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.692 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.692 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.692 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.692 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.692 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:44.692 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:45.258 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:45.258 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:14:45.258 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:45.258 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:45.259 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.259 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.259 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.259 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.259 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.259 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.259 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.259 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.259 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.825 00:14:45.825 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.825 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.825 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.084 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.084 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.084 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.084 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.084 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.084 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.084 { 00:14:46.084 "auth": { 00:14:46.084 "dhgroup": "ffdhe8192", 00:14:46.084 "digest": "sha384", 00:14:46.084 "state": "completed" 00:14:46.084 }, 00:14:46.084 "cntlid": 89, 00:14:46.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:46.084 "listen_address": { 00:14:46.084 "adrfam": "IPv4", 00:14:46.084 "traddr": "10.0.0.3", 00:14:46.084 "trsvcid": "4420", 00:14:46.084 "trtype": "TCP" 
00:14:46.084 }, 00:14:46.084 "peer_address": { 00:14:46.084 "adrfam": "IPv4", 00:14:46.084 "traddr": "10.0.0.1", 00:14:46.084 "trsvcid": "58562", 00:14:46.084 "trtype": "TCP" 00:14:46.084 }, 00:14:46.084 "qid": 0, 00:14:46.084 "state": "enabled", 00:14:46.084 "thread": "nvmf_tgt_poll_group_000" 00:14:46.084 } 00:14:46.084 ]' 00:14:46.085 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.343 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:46.343 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.343 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:46.343 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.343 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.343 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.343 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.602 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:46.602 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:47.537 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.537 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:47.537 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.537 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.537 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.537 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.537 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:47.537 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:47.794 15:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.794 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.361 00:14:48.361 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.361 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.361 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.927 { 00:14:48.927 "auth": { 00:14:48.927 "dhgroup": "ffdhe8192", 00:14:48.927 "digest": "sha384", 00:14:48.927 "state": "completed" 00:14:48.927 }, 00:14:48.927 "cntlid": 91, 00:14:48.927 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:48.927 "listen_address": { 00:14:48.927 "adrfam": "IPv4", 00:14:48.927 "traddr": "10.0.0.3", 00:14:48.927 "trsvcid": "4420", 00:14:48.927 "trtype": "TCP" 00:14:48.927 }, 00:14:48.927 "peer_address": { 00:14:48.927 "adrfam": "IPv4", 00:14:48.927 "traddr": "10.0.0.1", 00:14:48.927 "trsvcid": "58580", 00:14:48.927 "trtype": "TCP" 00:14:48.927 }, 00:14:48.927 "qid": 0, 00:14:48.927 "state": "enabled", 00:14:48.927 "thread": "nvmf_tgt_poll_group_000" 00:14:48.927 } 00:14:48.927 ]' 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:48.927 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.927 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.927 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.927 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.185 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:49.185 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:50.120 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.120 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:50.120 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.120 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.120 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.120 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.120 15:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:50.120 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.379 15:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.945 00:14:50.946 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.946 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.946 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.510 { 00:14:51.510 "auth": { 00:14:51.510 "dhgroup": "ffdhe8192", 
00:14:51.510 "digest": "sha384", 00:14:51.510 "state": "completed" 00:14:51.510 }, 00:14:51.510 "cntlid": 93, 00:14:51.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:51.510 "listen_address": { 00:14:51.510 "adrfam": "IPv4", 00:14:51.510 "traddr": "10.0.0.3", 00:14:51.510 "trsvcid": "4420", 00:14:51.510 "trtype": "TCP" 00:14:51.510 }, 00:14:51.510 "peer_address": { 00:14:51.510 "adrfam": "IPv4", 00:14:51.510 "traddr": "10.0.0.1", 00:14:51.510 "trsvcid": "58600", 00:14:51.510 "trtype": "TCP" 00:14:51.510 }, 00:14:51.510 "qid": 0, 00:14:51.510 "state": "enabled", 00:14:51.510 "thread": "nvmf_tgt_poll_group_000" 00:14:51.510 } 00:14:51.510 ]' 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.510 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.767 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:51.767 15:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:14:52.700 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.700 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:52.700 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.700 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.700 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.700 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.700 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:14:52.700 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.958 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.523 00:14:53.523 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.523 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.523 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.089 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.089 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.089 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.089 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.089 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.089 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.089 { 00:14:54.089 "auth": { 00:14:54.089 "dhgroup": 
"ffdhe8192", 00:14:54.089 "digest": "sha384", 00:14:54.089 "state": "completed" 00:14:54.089 }, 00:14:54.089 "cntlid": 95, 00:14:54.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:54.089 "listen_address": { 00:14:54.089 "adrfam": "IPv4", 00:14:54.089 "traddr": "10.0.0.3", 00:14:54.089 "trsvcid": "4420", 00:14:54.089 "trtype": "TCP" 00:14:54.089 }, 00:14:54.089 "peer_address": { 00:14:54.089 "adrfam": "IPv4", 00:14:54.089 "traddr": "10.0.0.1", 00:14:54.089 "trsvcid": "43606", 00:14:54.089 "trtype": "TCP" 00:14:54.089 }, 00:14:54.089 "qid": 0, 00:14:54.089 "state": "enabled", 00:14:54.089 "thread": "nvmf_tgt_poll_group_000" 00:14:54.089 } 00:14:54.089 ]' 00:14:54.089 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.089 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.089 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.089 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:54.089 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.089 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.089 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.089 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.654 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:54.654 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:14:55.220 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.220 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:55.220 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.220 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.220 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.220 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:55.220 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:55.220 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.220 
15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:55.220 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.786 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.044 00:14:56.044 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.044 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.044 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.303 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.303 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.303 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.303 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.303 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.303 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.303 { 00:14:56.303 "auth": { 00:14:56.303 "dhgroup": "null", 00:14:56.303 "digest": "sha512", 00:14:56.303 "state": "completed" 00:14:56.303 }, 00:14:56.303 "cntlid": 97, 00:14:56.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:56.303 "listen_address": { 00:14:56.303 "adrfam": "IPv4", 00:14:56.303 "traddr": "10.0.0.3", 00:14:56.303 "trsvcid": "4420", 00:14:56.303 "trtype": "TCP" 00:14:56.303 }, 00:14:56.303 "peer_address": { 00:14:56.303 "adrfam": "IPv4", 00:14:56.303 "traddr": "10.0.0.1", 00:14:56.303 "trsvcid": "43644", 00:14:56.303 "trtype": "TCP" 00:14:56.303 }, 00:14:56.303 "qid": 0, 00:14:56.303 "state": "enabled", 00:14:56.303 "thread": "nvmf_tgt_poll_group_000" 00:14:56.303 } 00:14:56.303 ]' 00:14:56.303 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.561 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.561 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.561 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:56.561 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.561 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.561 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.561 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.818 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:56.818 15:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:14:57.752 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.752 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:14:57.752 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.752 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.752 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:57.752 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.752 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:57.752 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:58.009 15:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.009 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.267 00:14:58.267 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.267 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.267 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.832 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.832 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.833 15:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.833 { 00:14:58.833 "auth": { 00:14:58.833 "dhgroup": "null", 00:14:58.833 "digest": "sha512", 00:14:58.833 "state": "completed" 00:14:58.833 }, 00:14:58.833 "cntlid": 99, 00:14:58.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:14:58.833 "listen_address": { 00:14:58.833 "adrfam": "IPv4", 00:14:58.833 "traddr": "10.0.0.3", 00:14:58.833 "trsvcid": "4420", 00:14:58.833 "trtype": "TCP" 00:14:58.833 }, 00:14:58.833 "peer_address": { 00:14:58.833 "adrfam": "IPv4", 00:14:58.833 "traddr": "10.0.0.1", 00:14:58.833 "trsvcid": "43676", 00:14:58.833 "trtype": "TCP" 00:14:58.833 }, 00:14:58.833 "qid": 0, 00:14:58.833 "state": "enabled", 00:14:58.833 "thread": "nvmf_tgt_poll_group_000" 00:14:58.833 } 00:14:58.833 ]' 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.833 15:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.091 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:14:59.091 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:00.028 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.028 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:00.028 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.028 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.028 15:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.028 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.028 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:00.028 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.285 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.286 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.543 00:15:00.543 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.543 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.543 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.801 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.801 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.801 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.801 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.801 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.801 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.801 { 00:15:00.801 "auth": { 00:15:00.801 "dhgroup": "null", 00:15:00.801 "digest": "sha512", 00:15:00.801 "state": "completed" 00:15:00.801 }, 00:15:00.801 "cntlid": 101, 00:15:00.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:00.801 "listen_address": { 00:15:00.801 "adrfam": "IPv4", 00:15:00.801 "traddr": "10.0.0.3", 00:15:00.801 "trsvcid": "4420", 00:15:00.801 "trtype": "TCP" 00:15:00.801 }, 00:15:00.801 "peer_address": { 00:15:00.801 "adrfam": "IPv4", 00:15:00.801 "traddr": "10.0.0.1", 00:15:00.801 "trsvcid": "43712", 00:15:00.801 "trtype": "TCP" 00:15:00.801 }, 00:15:00.801 "qid": 0, 00:15:00.801 "state": "enabled", 00:15:00.801 "thread": "nvmf_tgt_poll_group_000" 00:15:00.801 } 00:15:00.801 ]' 00:15:00.801 15:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.059 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.059 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.059 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:01.059 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.059 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.059 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.059 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.316 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:01.316 15:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:02.250 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.250 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:02.250 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.250 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
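After each attach, the script pulls the qpair list from the target and asserts that authentication really completed with the negotiated parameters; the JSON dumps above (cntlid 99, 101, ...) are that output. The same checks, condensed, reusing $rpc and $subnqn from the earlier sketch:

  # Fetch the subsystem's qpairs and assert on the auth block of the first one.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]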
-- common/autotest_common.sh@10 -- # set +x 00:15:02.250 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.250 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.250 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:02.250 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.508 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.767 00:15:02.767 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.767 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.767 15:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.384 { 00:15:03.384 "auth": { 00:15:03.384 "dhgroup": "null", 00:15:03.384 "digest": "sha512", 00:15:03.384 "state": "completed" 00:15:03.384 }, 00:15:03.384 "cntlid": 103, 00:15:03.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:03.384 "listen_address": { 00:15:03.384 "adrfam": "IPv4", 00:15:03.384 "traddr": "10.0.0.3", 00:15:03.384 "trsvcid": "4420", 00:15:03.384 "trtype": "TCP" 00:15:03.384 }, 00:15:03.384 "peer_address": { 00:15:03.384 "adrfam": "IPv4", 00:15:03.384 "traddr": "10.0.0.1", 00:15:03.384 "trsvcid": "35660", 00:15:03.384 "trtype": "TCP" 00:15:03.384 }, 00:15:03.384 "qid": 0, 00:15:03.384 "state": "enabled", 00:15:03.384 "thread": "nvmf_tgt_poll_group_000" 00:15:03.384 } 00:15:03.384 ]' 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.384 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.642 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:03.642 15:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:04.578 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.578 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:04.578 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.578 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.578 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
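Note that the key3 rounds call nvmf_subsystem_add_host with only --dhchap-key: the script builds the controller-key arguments through a ${var:+...} expansion, so the bidirectional flags appear only for key indexes that actually have a controller secret. A sketch of that pattern, with hypothetical per-index secrets $c0..$c2:

  # ckeys holds one controller secret per key index; an empty slot means
  # unidirectional auth for that index (key3 in the trace above).
  ckeys=("$c0" "$c1" "$c2" "")
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  # "${ckey[@]}" expands to nothing when the slot is empty.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"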
]] 00:15:04.578 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.578 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.578 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:04.578 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.836 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.094 00:15:05.095 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.095 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.095 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.661 
15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.661 { 00:15:05.661 "auth": { 00:15:05.661 "dhgroup": "ffdhe2048", 00:15:05.661 "digest": "sha512", 00:15:05.661 "state": "completed" 00:15:05.661 }, 00:15:05.661 "cntlid": 105, 00:15:05.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:05.661 "listen_address": { 00:15:05.661 "adrfam": "IPv4", 00:15:05.661 "traddr": "10.0.0.3", 00:15:05.661 "trsvcid": "4420", 00:15:05.661 "trtype": "TCP" 00:15:05.661 }, 00:15:05.661 "peer_address": { 00:15:05.661 "adrfam": "IPv4", 00:15:05.661 "traddr": "10.0.0.1", 00:15:05.661 "trsvcid": "35678", 00:15:05.661 "trtype": "TCP" 00:15:05.661 }, 00:15:05.661 "qid": 0, 00:15:05.661 "state": "enabled", 00:15:05.661 "thread": "nvmf_tgt_poll_group_000" 00:15:05.661 } 00:15:05.661 ]' 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.661 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.229 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:06.229 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:06.795 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.795 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:06.795 15:28:05 
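Each round is also exercised through the kernel initiator: nvme connect is handed the raw DHHC-1 secrets rather than keyring names, and the "disconnected 1 controller(s)" lines confirm the teardown. A sketch of that path, with $key/$ckey standing in for the DHHC-1:xx:...: strings printed in the trace:

  # Kernel-initiator path: pass the secrets directly on the command line,
  # mirroring the flags used above (-i io queues, -l ctrl-loss-tmo).
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n "$subnqn"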
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.795 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.795 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.795 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.795 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:06.795 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:07.361 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:07.361 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.361 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:07.361 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:07.361 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:07.361 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.361 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.362 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.362 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.362 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.362 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.362 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.362 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.619 00:15:07.619 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.619 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.619 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.186 { 00:15:08.186 "auth": { 00:15:08.186 "dhgroup": "ffdhe2048", 00:15:08.186 "digest": "sha512", 00:15:08.186 "state": "completed" 00:15:08.186 }, 00:15:08.186 "cntlid": 107, 00:15:08.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:08.186 "listen_address": { 00:15:08.186 "adrfam": "IPv4", 00:15:08.186 "traddr": "10.0.0.3", 00:15:08.186 "trsvcid": "4420", 00:15:08.186 "trtype": "TCP" 00:15:08.186 }, 00:15:08.186 "peer_address": { 00:15:08.186 "adrfam": "IPv4", 00:15:08.186 "traddr": "10.0.0.1", 00:15:08.186 "trsvcid": "35706", 00:15:08.186 "trtype": "TCP" 00:15:08.186 }, 00:15:08.186 "qid": 0, 00:15:08.186 "state": "enabled", 00:15:08.186 "thread": "nvmf_tgt_poll_group_000" 00:15:08.186 } 00:15:08.186 ]' 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.186 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.444 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:08.444 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:09.379 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.379 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:09.379 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.379 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.379 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.379 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.379 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:09.379 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.637 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.203 00:15:10.203 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.203 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.203 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:10.462 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.462 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.462 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.462 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.462 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.462 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.462 { 00:15:10.462 "auth": { 00:15:10.462 "dhgroup": "ffdhe2048", 00:15:10.462 "digest": "sha512", 00:15:10.462 "state": "completed" 00:15:10.462 }, 00:15:10.462 "cntlid": 109, 00:15:10.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:10.462 "listen_address": { 00:15:10.462 "adrfam": "IPv4", 00:15:10.462 "traddr": "10.0.0.3", 00:15:10.462 "trsvcid": "4420", 00:15:10.462 "trtype": "TCP" 00:15:10.462 }, 00:15:10.462 "peer_address": { 00:15:10.462 "adrfam": "IPv4", 00:15:10.462 "traddr": "10.0.0.1", 00:15:10.462 "trsvcid": "35738", 00:15:10.462 "trtype": "TCP" 00:15:10.462 }, 00:15:10.462 "qid": 0, 00:15:10.462 "state": "enabled", 00:15:10.462 "thread": "nvmf_tgt_poll_group_000" 00:15:10.462 } 00:15:10.462 ]' 00:15:10.462 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.462 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:10.462 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.721 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:10.721 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.721 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.721 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.721 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.979 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:10.979 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:11.913 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.913 15:28:10 
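Between rounds the state is torn down in a fixed order: the host-side bdev controller is detached first, the kernel-initiator connection is dropped after its own pass, and only then is the host entry removed from the subsystem so the next add_host starts from a clean slate. Condensed, under the same assumed variables:

  # Per-round teardown, in the order the trace shows.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"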
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:11.913 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.913 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.913 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.913 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.913 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:11.913 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.171 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.430 00:15:12.430 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.430 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.430 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.688 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.688 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.688 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.688 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.946 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.946 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.946 { 00:15:12.946 "auth": { 00:15:12.946 "dhgroup": "ffdhe2048", 00:15:12.946 "digest": "sha512", 00:15:12.946 "state": "completed" 00:15:12.946 }, 00:15:12.946 "cntlid": 111, 00:15:12.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:12.946 "listen_address": { 00:15:12.946 "adrfam": "IPv4", 00:15:12.946 "traddr": "10.0.0.3", 00:15:12.946 "trsvcid": "4420", 00:15:12.946 "trtype": "TCP" 00:15:12.946 }, 00:15:12.946 "peer_address": { 00:15:12.946 "adrfam": "IPv4", 00:15:12.946 "traddr": "10.0.0.1", 00:15:12.946 "trsvcid": "50034", 00:15:12.946 "trtype": "TCP" 00:15:12.946 }, 00:15:12.946 "qid": 0, 00:15:12.946 "state": "enabled", 00:15:12.946 "thread": "nvmf_tgt_poll_group_000" 00:15:12.946 } 00:15:12.946 ]' 00:15:12.946 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.946 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.946 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.946 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:12.946 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.946 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.946 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.946 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.512 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:13.512 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:14.079 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.079 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:14.079 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.079 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.079 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.079 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.079 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.079 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:14.079 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.645 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.925 00:15:14.925 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.925 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
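The "for dhgroup" / "for keyid" markers show the overall shape of the test: an outer loop over DH groups (null, then ffdhe2048, now ffdhe3072) and an inner loop over every key index, re-running the same round each time. Roughly, and only for the groups visible in this part of the trace:

  # Outer/inner loops as reconstructed from the trace; the real script may
  # cover additional digests and DH groups beyond those shown here.
  dhgroups=(null ffdhe2048 ffdhe3072)
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in 0 1 2 3; do
          "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
              --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"   # helper in target/auth.sh
      done
  done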
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.925 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.186 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.186 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.186 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.186 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.186 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.186 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.186 { 00:15:15.186 "auth": { 00:15:15.186 "dhgroup": "ffdhe3072", 00:15:15.186 "digest": "sha512", 00:15:15.186 "state": "completed" 00:15:15.186 }, 00:15:15.186 "cntlid": 113, 00:15:15.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:15.186 "listen_address": { 00:15:15.186 "adrfam": "IPv4", 00:15:15.186 "traddr": "10.0.0.3", 00:15:15.186 "trsvcid": "4420", 00:15:15.186 "trtype": "TCP" 00:15:15.186 }, 00:15:15.186 "peer_address": { 00:15:15.186 "adrfam": "IPv4", 00:15:15.186 "traddr": "10.0.0.1", 00:15:15.186 "trsvcid": "50054", 00:15:15.186 "trtype": "TCP" 00:15:15.186 }, 00:15:15.186 "qid": 0, 00:15:15.186 "state": "enabled", 00:15:15.186 "thread": "nvmf_tgt_poll_group_000" 00:15:15.186 } 00:15:15.186 ]' 00:15:15.186 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.444 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:15.444 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.444 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:15.444 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.444 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.444 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.444 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.702 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:15.702 15:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret 
DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:16.636 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.636 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:16.636 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.636 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.636 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.636 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.636 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:16.636 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.893 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.459 00:15:17.459 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.459 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.459 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.716 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.716 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.717 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.717 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.717 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.717 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.717 { 00:15:17.717 "auth": { 00:15:17.717 "dhgroup": "ffdhe3072", 00:15:17.717 "digest": "sha512", 00:15:17.717 "state": "completed" 00:15:17.717 }, 00:15:17.717 "cntlid": 115, 00:15:17.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:17.717 "listen_address": { 00:15:17.717 "adrfam": "IPv4", 00:15:17.717 "traddr": "10.0.0.3", 00:15:17.717 "trsvcid": "4420", 00:15:17.717 "trtype": "TCP" 00:15:17.717 }, 00:15:17.717 "peer_address": { 00:15:17.717 "adrfam": "IPv4", 00:15:17.717 "traddr": "10.0.0.1", 00:15:17.717 "trsvcid": "50074", 00:15:17.717 "trtype": "TCP" 00:15:17.717 }, 00:15:17.717 "qid": 0, 00:15:17.717 "state": "enabled", 00:15:17.717 "thread": "nvmf_tgt_poll_group_000" 00:15:17.717 } 00:15:17.717 ]' 00:15:17.717 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.717 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:17.717 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.975 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.975 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.975 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.975 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.975 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.233 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:18.233 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 
425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:19.168 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.168 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:19.168 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.168 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.168 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.168 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.168 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:19.168 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.426 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.684 00:15:19.684 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.684 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.684 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.249 { 00:15:20.249 "auth": { 00:15:20.249 "dhgroup": "ffdhe3072", 00:15:20.249 "digest": "sha512", 00:15:20.249 "state": "completed" 00:15:20.249 }, 00:15:20.249 "cntlid": 117, 00:15:20.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:20.249 "listen_address": { 00:15:20.249 "adrfam": "IPv4", 00:15:20.249 "traddr": "10.0.0.3", 00:15:20.249 "trsvcid": "4420", 00:15:20.249 "trtype": "TCP" 00:15:20.249 }, 00:15:20.249 "peer_address": { 00:15:20.249 "adrfam": "IPv4", 00:15:20.249 "traddr": "10.0.0.1", 00:15:20.249 "trsvcid": "50096", 00:15:20.249 "trtype": "TCP" 00:15:20.249 }, 00:15:20.249 "qid": 0, 00:15:20.249 "state": "enabled", 00:15:20.249 "thread": "nvmf_tgt_poll_group_000" 00:15:20.249 } 00:15:20.249 ]' 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.249 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.507 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.507 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.507 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.765 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:20.765 15:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:21.330 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.588 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:21.588 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.588 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.588 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.588 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.588 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:21.588 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.846 15:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.104 00:15:22.362 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.362 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.362 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.620 { 00:15:22.620 "auth": { 00:15:22.620 "dhgroup": "ffdhe3072", 00:15:22.620 "digest": "sha512", 00:15:22.620 "state": "completed" 00:15:22.620 }, 00:15:22.620 "cntlid": 119, 00:15:22.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:22.620 "listen_address": { 00:15:22.620 "adrfam": "IPv4", 00:15:22.620 "traddr": "10.0.0.3", 00:15:22.620 "trsvcid": "4420", 00:15:22.620 "trtype": "TCP" 00:15:22.620 }, 00:15:22.620 "peer_address": { 00:15:22.620 "adrfam": "IPv4", 00:15:22.620 "traddr": "10.0.0.1", 00:15:22.620 "trsvcid": "50592", 00:15:22.620 "trtype": "TCP" 00:15:22.620 }, 00:15:22.620 "qid": 0, 00:15:22.620 "state": "enabled", 00:15:22.620 "thread": "nvmf_tgt_poll_group_000" 00:15:22.620 } 00:15:22.620 ]' 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.620 15:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.879 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:22.879 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:23.837 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.837 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:23.837 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.837 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.837 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.837 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.837 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.837 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:23.837 15:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.095 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.351 00:15:24.351 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.351 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.351 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.915 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.915 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.915 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.915 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.915 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.915 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.915 { 00:15:24.915 "auth": { 00:15:24.915 "dhgroup": "ffdhe4096", 00:15:24.915 "digest": "sha512", 00:15:24.915 "state": "completed" 00:15:24.915 }, 00:15:24.915 "cntlid": 121, 00:15:24.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:24.915 "listen_address": { 00:15:24.915 "adrfam": "IPv4", 00:15:24.915 "traddr": "10.0.0.3", 00:15:24.915 "trsvcid": "4420", 00:15:24.915 "trtype": "TCP" 00:15:24.915 }, 00:15:24.915 "peer_address": { 00:15:24.915 "adrfam": "IPv4", 00:15:24.915 "traddr": "10.0.0.1", 00:15:24.915 "trsvcid": "50622", 00:15:24.915 "trtype": "TCP" 00:15:24.915 }, 00:15:24.915 "qid": 0, 00:15:24.915 "state": "enabled", 00:15:24.915 "thread": "nvmf_tgt_poll_group_000" 00:15:24.915 } 00:15:24.915 ]' 00:15:24.916 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.916 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.916 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.916 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.916 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.916 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.916 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.916 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.479 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret 
DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:25.479 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:26.042 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.042 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:26.042 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.042 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.042 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.042 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.042 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:26.042 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.299 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.863 00:15:26.864 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.864 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.864 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.122 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.122 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.122 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.122 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.122 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.122 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.122 { 00:15:27.122 "auth": { 00:15:27.122 "dhgroup": "ffdhe4096", 00:15:27.122 "digest": "sha512", 00:15:27.122 "state": "completed" 00:15:27.122 }, 00:15:27.122 "cntlid": 123, 00:15:27.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:27.122 "listen_address": { 00:15:27.122 "adrfam": "IPv4", 00:15:27.122 "traddr": "10.0.0.3", 00:15:27.122 "trsvcid": "4420", 00:15:27.122 "trtype": "TCP" 00:15:27.122 }, 00:15:27.122 "peer_address": { 00:15:27.122 "adrfam": "IPv4", 00:15:27.122 "traddr": "10.0.0.1", 00:15:27.122 "trsvcid": "50642", 00:15:27.122 "trtype": "TCP" 00:15:27.122 }, 00:15:27.122 "qid": 0, 00:15:27.122 "state": "enabled", 00:15:27.122 "thread": "nvmf_tgt_poll_group_000" 00:15:27.122 } 00:15:27.122 ]' 00:15:27.122 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.381 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:27.381 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.381 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.381 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.381 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.381 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.381 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.643 15:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:27.643 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:28.577 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.577 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:28.577 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.577 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.577 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.577 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.577 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.577 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.836 15:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.836 15:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.094 00:15:29.094 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.094 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.094 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.659 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.659 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.659 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.659 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.659 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.659 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.659 { 00:15:29.659 "auth": { 00:15:29.659 "dhgroup": "ffdhe4096", 00:15:29.659 "digest": "sha512", 00:15:29.659 "state": "completed" 00:15:29.659 }, 00:15:29.659 "cntlid": 125, 00:15:29.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:29.659 "listen_address": { 00:15:29.659 "adrfam": "IPv4", 00:15:29.659 "traddr": "10.0.0.3", 00:15:29.659 "trsvcid": "4420", 00:15:29.659 "trtype": "TCP" 00:15:29.659 }, 00:15:29.659 "peer_address": { 00:15:29.659 "adrfam": "IPv4", 00:15:29.659 "traddr": "10.0.0.1", 00:15:29.659 "trsvcid": "50672", 00:15:29.659 "trtype": "TCP" 00:15:29.659 }, 00:15:29.659 "qid": 0, 00:15:29.659 "state": "enabled", 00:15:29.659 "thread": "nvmf_tgt_poll_group_000" 00:15:29.659 } 00:15:29.660 ]' 00:15:29.660 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.660 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:29.660 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.660 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:29.660 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.660 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.660 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.660 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.918 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:29.918 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:30.852 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.852 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:30.852 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.852 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.852 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.852 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.852 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:30.852 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.111 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:15:31.112 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.112 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.689 00:15:31.689 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.689 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.689 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.948 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.948 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.948 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.948 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.948 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.948 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.948 { 00:15:31.948 "auth": { 00:15:31.948 "dhgroup": "ffdhe4096", 00:15:31.948 "digest": "sha512", 00:15:31.948 "state": "completed" 00:15:31.948 }, 00:15:31.948 "cntlid": 127, 00:15:31.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:31.948 "listen_address": { 00:15:31.948 "adrfam": "IPv4", 00:15:31.948 "traddr": "10.0.0.3", 00:15:31.948 "trsvcid": "4420", 00:15:31.948 "trtype": "TCP" 00:15:31.948 }, 00:15:31.948 "peer_address": { 00:15:31.948 "adrfam": "IPv4", 00:15:31.948 "traddr": "10.0.0.1", 00:15:31.948 "trsvcid": "50696", 00:15:31.948 "trtype": "TCP" 00:15:31.948 }, 00:15:31.948 "qid": 0, 00:15:31.948 "state": "enabled", 00:15:31.948 "thread": "nvmf_tgt_poll_group_000" 00:15:31.948 } 00:15:31.948 ]' 00:15:31.948 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.948 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.948 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.948 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.948 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.206 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.206 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.206 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.465 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:32.465 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:33.399 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.399 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:33.399 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.399 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.399 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.399 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.399 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.399 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:33.399 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.657 15:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.657 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.223 00:15:34.223 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.223 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.223 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.481 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.481 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.481 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.481 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.481 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.481 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.481 { 00:15:34.481 "auth": { 00:15:34.481 "dhgroup": "ffdhe6144", 00:15:34.481 "digest": "sha512", 00:15:34.481 "state": "completed" 00:15:34.481 }, 00:15:34.481 "cntlid": 129, 00:15:34.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:34.481 "listen_address": { 00:15:34.481 "adrfam": "IPv4", 00:15:34.481 "traddr": "10.0.0.3", 00:15:34.481 "trsvcid": "4420", 00:15:34.481 "trtype": "TCP" 00:15:34.481 }, 00:15:34.481 "peer_address": { 00:15:34.481 "adrfam": "IPv4", 00:15:34.481 "traddr": "10.0.0.1", 00:15:34.481 "trsvcid": "59904", 00:15:34.481 "trtype": "TCP" 00:15:34.481 }, 00:15:34.481 "qid": 0, 00:15:34.481 "state": "enabled", 00:15:34.481 "thread": "nvmf_tgt_poll_group_000" 00:15:34.481 } 00:15:34.481 ]' 00:15:34.481 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.740 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.740 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.740 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.740 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.740 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.740 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.740 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.999 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:34.999 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:35.934 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.934 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:35.934 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.934 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.934 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.934 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.934 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:35.934 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.194 15:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.194 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.761 00:15:36.761 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.761 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.761 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.019 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.019 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.019 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.019 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.019 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.019 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.019 { 00:15:37.019 "auth": { 00:15:37.019 "dhgroup": "ffdhe6144", 00:15:37.019 "digest": "sha512", 00:15:37.019 "state": "completed" 00:15:37.019 }, 00:15:37.019 "cntlid": 131, 00:15:37.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:37.019 "listen_address": { 00:15:37.019 "adrfam": "IPv4", 00:15:37.019 "traddr": "10.0.0.3", 00:15:37.019 "trsvcid": "4420", 00:15:37.019 "trtype": "TCP" 00:15:37.019 }, 00:15:37.019 "peer_address": { 00:15:37.019 "adrfam": "IPv4", 00:15:37.019 "traddr": "10.0.0.1", 00:15:37.019 "trsvcid": "59936", 00:15:37.019 "trtype": "TCP" 00:15:37.019 }, 00:15:37.019 "qid": 0, 00:15:37.019 "state": "enabled", 00:15:37.019 "thread": "nvmf_tgt_poll_group_000" 00:15:37.019 } 00:15:37.019 ]' 00:15:37.019 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.019 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.019 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.019 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.020 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:15:37.020 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.020 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.020 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.586 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:37.586 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:38.153 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.153 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:38.153 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.153 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.153 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.153 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.153 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:38.153 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.719 15:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.719 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.285 00:15:39.285 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.285 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.285 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.543 { 00:15:39.543 "auth": { 00:15:39.543 "dhgroup": "ffdhe6144", 00:15:39.543 "digest": "sha512", 00:15:39.543 "state": "completed" 00:15:39.543 }, 00:15:39.543 "cntlid": 133, 00:15:39.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:39.543 "listen_address": { 00:15:39.543 "adrfam": "IPv4", 00:15:39.543 "traddr": "10.0.0.3", 00:15:39.543 "trsvcid": "4420", 00:15:39.543 "trtype": "TCP" 00:15:39.543 }, 00:15:39.543 "peer_address": { 00:15:39.543 "adrfam": "IPv4", 00:15:39.543 "traddr": "10.0.0.1", 00:15:39.543 "trsvcid": "59954", 00:15:39.543 "trtype": "TCP" 00:15:39.543 }, 00:15:39.543 "qid": 0, 00:15:39.543 "state": "enabled", 00:15:39.543 "thread": "nvmf_tgt_poll_group_000" 00:15:39.543 } 00:15:39.543 ]' 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:15:39.543 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.801 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.801 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.801 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.063 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:40.063 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:40.631 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.631 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:40.631 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.631 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.631 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.631 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.631 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:40.631 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.197 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.764 00:15:41.764 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.764 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.764 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.022 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.022 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.022 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.022 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.022 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.022 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.022 { 00:15:42.022 "auth": { 00:15:42.022 "dhgroup": "ffdhe6144", 00:15:42.022 "digest": "sha512", 00:15:42.022 "state": "completed" 00:15:42.022 }, 00:15:42.022 "cntlid": 135, 00:15:42.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:42.022 "listen_address": { 00:15:42.022 "adrfam": "IPv4", 00:15:42.022 "traddr": "10.0.0.3", 00:15:42.022 "trsvcid": "4420", 00:15:42.022 "trtype": "TCP" 00:15:42.022 }, 00:15:42.022 "peer_address": { 00:15:42.022 "adrfam": "IPv4", 00:15:42.022 "traddr": "10.0.0.1", 00:15:42.022 "trsvcid": "59982", 00:15:42.022 "trtype": "TCP" 00:15:42.022 }, 00:15:42.022 "qid": 0, 00:15:42.022 "state": "enabled", 00:15:42.022 "thread": "nvmf_tgt_poll_group_000" 00:15:42.022 } 00:15:42.022 ]' 00:15:42.022 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.022 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:42.022 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.022 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.022 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.022 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.022 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.022 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.280 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:42.280 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:43.220 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.220 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:43.220 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.220 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.220 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.220 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.220 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.220 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:43.220 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.479 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.412 00:15:44.412 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.412 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.412 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.670 { 00:15:44.670 "auth": { 00:15:44.670 "dhgroup": "ffdhe8192", 00:15:44.670 "digest": "sha512", 00:15:44.670 "state": "completed" 00:15:44.670 }, 00:15:44.670 "cntlid": 137, 00:15:44.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:44.670 "listen_address": { 00:15:44.670 "adrfam": "IPv4", 00:15:44.670 "traddr": "10.0.0.3", 00:15:44.670 "trsvcid": "4420", 00:15:44.670 "trtype": "TCP" 00:15:44.670 }, 00:15:44.670 "peer_address": { 00:15:44.670 "adrfam": "IPv4", 00:15:44.670 "traddr": "10.0.0.1", 00:15:44.670 "trsvcid": "44524", 00:15:44.670 "trtype": "TCP" 00:15:44.670 }, 00:15:44.670 "qid": 0, 00:15:44.670 "state": "enabled", 00:15:44.670 "thread": "nvmf_tgt_poll_group_000" 00:15:44.670 } 00:15:44.670 ]' 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:44.670 15:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.670 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.671 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.236 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:45.236 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:45.803 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.803 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:45.803 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.803 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.803 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.803 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:45.803 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.061 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:46.061 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.061 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.061 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:46.061 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:46.061 15:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.061 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.062 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.062 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.062 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.062 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.062 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.062 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.024 00:15:47.024 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.024 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.024 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.282 { 00:15:47.282 "auth": { 00:15:47.282 "dhgroup": "ffdhe8192", 00:15:47.282 "digest": "sha512", 00:15:47.282 "state": "completed" 00:15:47.282 }, 00:15:47.282 "cntlid": 139, 00:15:47.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:47.282 "listen_address": { 00:15:47.282 "adrfam": "IPv4", 00:15:47.282 "traddr": "10.0.0.3", 00:15:47.282 "trsvcid": "4420", 00:15:47.282 "trtype": "TCP" 00:15:47.282 }, 00:15:47.282 "peer_address": { 00:15:47.282 "adrfam": "IPv4", 00:15:47.282 "traddr": "10.0.0.1", 00:15:47.282 "trsvcid": "44548", 00:15:47.282 "trtype": "TCP" 00:15:47.282 }, 00:15:47.282 "qid": 0, 00:15:47.282 "state": "enabled", 00:15:47.282 "thread": "nvmf_tgt_poll_group_000" 00:15:47.282 } 00:15:47.282 ]' 00:15:47.282 15:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.282 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.541 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:47.541 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: --dhchap-ctrl-secret DHHC-1:02:N2U3NTA5NjMzZjU3NTQ1NjE3NTZlYjZmOWJhYzAwOGY3NDdiZmNiYzRjNWI3MmFjVTP4Yg==: 00:15:48.475 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.475 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:48.475 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.475 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.475 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.475 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.475 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:48.475 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.735 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.671 00:15:49.671 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.671 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.671 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.929 { 00:15:49.929 "auth": { 00:15:49.929 "dhgroup": "ffdhe8192", 00:15:49.929 "digest": "sha512", 00:15:49.929 "state": "completed" 00:15:49.929 }, 00:15:49.929 "cntlid": 141, 00:15:49.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:49.929 "listen_address": { 00:15:49.929 "adrfam": "IPv4", 00:15:49.929 "traddr": "10.0.0.3", 00:15:49.929 "trsvcid": "4420", 00:15:49.929 "trtype": "TCP" 00:15:49.929 }, 00:15:49.929 "peer_address": { 00:15:49.929 "adrfam": "IPv4", 00:15:49.929 "traddr": "10.0.0.1", 00:15:49.929 "trsvcid": "44580", 00:15:49.929 "trtype": "TCP" 00:15:49.929 }, 00:15:49.929 "qid": 0, 00:15:49.929 "state": 
"enabled", 00:15:49.929 "thread": "nvmf_tgt_poll_group_000" 00:15:49.929 } 00:15:49.929 ]' 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.929 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.929 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.929 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.929 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.227 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:50.227 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYWZhNGFlNTY2OGU1MjQxYWZiZjBlNTU4ZWQyNTZuXs/4: 00:15:51.206 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.206 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:51.206 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.206 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.206 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.206 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.206 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:51.206 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.464 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.031 00:15:52.031 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.031 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.031 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.595 { 00:15:52.595 "auth": { 00:15:52.595 "dhgroup": "ffdhe8192", 00:15:52.595 "digest": "sha512", 00:15:52.595 "state": "completed" 00:15:52.595 }, 00:15:52.595 "cntlid": 143, 00:15:52.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:52.595 "listen_address": { 00:15:52.595 "adrfam": "IPv4", 00:15:52.595 "traddr": "10.0.0.3", 00:15:52.595 "trsvcid": "4420", 00:15:52.595 "trtype": "TCP" 00:15:52.595 }, 00:15:52.595 "peer_address": { 00:15:52.595 "adrfam": "IPv4", 00:15:52.595 "traddr": "10.0.0.1", 00:15:52.595 "trsvcid": "44602", 00:15:52.595 "trtype": "TCP" 00:15:52.595 }, 00:15:52.595 "qid": 0, 00:15:52.595 
"state": "enabled", 00:15:52.595 "thread": "nvmf_tgt_poll_group_000" 00:15:52.595 } 00:15:52.595 ]' 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.595 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.852 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:52.852 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:53.783 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.041 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.605 00:15:54.862 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.862 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.862 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.119 { 00:15:55.119 "auth": { 00:15:55.119 "dhgroup": "ffdhe8192", 00:15:55.119 "digest": "sha512", 00:15:55.119 "state": "completed" 00:15:55.119 }, 00:15:55.119 
"cntlid": 145, 00:15:55.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:55.119 "listen_address": { 00:15:55.119 "adrfam": "IPv4", 00:15:55.119 "traddr": "10.0.0.3", 00:15:55.119 "trsvcid": "4420", 00:15:55.119 "trtype": "TCP" 00:15:55.119 }, 00:15:55.119 "peer_address": { 00:15:55.119 "adrfam": "IPv4", 00:15:55.119 "traddr": "10.0.0.1", 00:15:55.119 "trsvcid": "49346", 00:15:55.119 "trtype": "TCP" 00:15:55.119 }, 00:15:55.119 "qid": 0, 00:15:55.119 "state": "enabled", 00:15:55.119 "thread": "nvmf_tgt_poll_group_000" 00:15:55.119 } 00:15:55.119 ]' 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:55.119 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.376 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.376 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.376 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.634 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:55.634 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:00:MjI4MDBkMDcyYzllNDQzMjIzMDVmN2VkZjg2NDg2MmNiNGRkNmI1MmI2MzQzNTM3DSIVTg==: --dhchap-ctrl-secret DHHC-1:03:OGJhZmRmN2VkMDg0M2JjZmMzNjQzOWE4OTJkM2Y3MGFhYjFjYjEwNmVmNTk1NGMzNDM3YmJiYzUyYzc2NjY2MJSkxbI=: 00:15:56.565 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.565 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:56.565 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.565 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.565 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.565 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 00:15:56.566 15:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:56.566 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:57.498 2024/10/01 15:28:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:57.498 request: 00:15:57.498 { 00:15:57.498 "method": "bdev_nvme_attach_controller", 00:15:57.498 "params": { 00:15:57.498 "name": "nvme0", 00:15:57.498 "trtype": "tcp", 00:15:57.498 "traddr": "10.0.0.3", 00:15:57.498 "adrfam": "ipv4", 00:15:57.498 "trsvcid": "4420", 00:15:57.498 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:57.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:57.498 "prchk_reftag": false, 00:15:57.498 "prchk_guard": false, 00:15:57.498 "hdgst": false, 00:15:57.498 "ddgst": false, 00:15:57.498 "dhchap_key": "key2", 00:15:57.498 "allow_unrecognized_csi": false 00:15:57.498 } 00:15:57.498 } 00:15:57.498 Got JSON-RPC error response 00:15:57.498 GoRPCClient: error on JSON-RPC call 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:57.498 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:58.063 2024/10/01 15:28:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:58.063 request: 00:15:58.063 { 00:15:58.063 "method": "bdev_nvme_attach_controller", 00:15:58.063 "params": { 00:15:58.063 "name": "nvme0", 00:15:58.063 "trtype": "tcp", 00:15:58.063 "traddr": "10.0.0.3", 00:15:58.063 "adrfam": "ipv4", 00:15:58.063 "trsvcid": "4420", 00:15:58.063 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:58.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:58.063 "prchk_reftag": false, 00:15:58.063 "prchk_guard": false, 00:15:58.063 "hdgst": false, 00:15:58.063 "ddgst": false, 00:15:58.063 "dhchap_key": "key1", 00:15:58.063 "dhchap_ctrlr_key": "ckey2", 00:15:58.063 "allow_unrecognized_csi": false 00:15:58.063 } 00:15:58.063 } 00:15:58.063 Got JSON-RPC error response 00:15:58.063 GoRPCClient: error on JSON-RPC call 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # 
type -t bdev_connect 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.063 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.064 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.064 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.995 2024/10/01 15:28:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:58.995 request: 00:15:58.995 { 00:15:58.995 "method": "bdev_nvme_attach_controller", 00:15:58.995 "params": { 00:15:58.995 "name": "nvme0", 00:15:58.995 "trtype": "tcp", 00:15:58.995 "traddr": "10.0.0.3", 00:15:58.995 "adrfam": "ipv4", 00:15:58.995 "trsvcid": "4420", 00:15:58.995 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:58.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:15:58.995 "prchk_reftag": false, 00:15:58.995 "prchk_guard": false, 00:15:58.995 "hdgst": false, 00:15:58.995 "ddgst": false, 00:15:58.995 "dhchap_key": "key1", 00:15:58.995 "dhchap_ctrlr_key": "ckey1", 00:15:58.995 "allow_unrecognized_csi": false 00:15:58.995 } 00:15:58.995 } 00:15:58.995 Got JSON-RPC error response 00:15:58.995 GoRPCClient: error on JSON-RPC call 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76439 00:15:58.995 15:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 76439 ']' 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 76439 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76439 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:58.995 killing process with pid 76439 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76439' 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 76439 00:15:58.995 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 76439 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=81564 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 81564 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 81564 ']' 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
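# [annotation: editorial note, not part of the captured log]
# Step 160 above kills the previous target (pid 76439) and restarts
# nvmf_tgt with --wait-for-rpc and -L nvmf_auth so that
# authentication-level debug logging is enabled for the remaining cases.
# Sketch of the launch sequence recorded in this run:
#
#   ip netns exec nvmf_tgt_ns_spdk \
#       ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
#   nvmfpid=$!                 # 81564 in this run
#   waitforlisten "$nvmfpid"   # waits for /var/tmp/spdk.sock to accept RPCs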
00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.995 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.253 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.253 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:59.253 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:59.253 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:59.253 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.510 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.510 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:59.510 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81564 00:15:59.510 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 81564 ']' 00:15:59.510 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.510 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.511 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:59.511 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.511 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.768 null0 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RZV 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.5Af ]] 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Af 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.n1j 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.qs7 ]] 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qs7 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:59.768 15:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mWY 00:15:59.768 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.1Tq ]] 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Tq 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.3Dx 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.769 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.028 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:00.028 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.028 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.028 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:16:00.028 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.028 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.028 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.028 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.028 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
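# [annotation: editorial note, not part of the captured log]
# Steps 174-176 above load every generated secret into the restarted
# target's keyring before it can be referenced by name: keyN is the host
# key, ckeyN the optional controller (bidirectional) key. Per key pair the
# pattern is:
#
#   rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-sha256.n1j
#   rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qs7
#
# Only after registration can nvmf_subsystem_add_host grant access with
# --dhchap-key key1 (plus --dhchap-ctrlr-key ckey1 for bidirectional auth).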
00:16:00.028 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.000 nvme0n1 00:16:01.000 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.000 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.000 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.566 { 00:16:01.566 "auth": { 00:16:01.566 "dhgroup": "ffdhe8192", 00:16:01.566 "digest": "sha512", 00:16:01.566 "state": "completed" 00:16:01.566 }, 00:16:01.566 "cntlid": 1, 00:16:01.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:16:01.566 "listen_address": { 00:16:01.566 "adrfam": "IPv4", 00:16:01.566 "traddr": "10.0.0.3", 00:16:01.566 "trsvcid": "4420", 00:16:01.566 "trtype": "TCP" 00:16:01.566 }, 00:16:01.566 "peer_address": { 00:16:01.566 "adrfam": "IPv4", 00:16:01.566 "traddr": "10.0.0.1", 00:16:01.566 "trsvcid": "49404", 00:16:01.566 "trtype": "TCP" 00:16:01.566 }, 00:16:01.566 "qid": 0, 00:16:01.566 "state": "enabled", 00:16:01.566 "thread": "nvmf_tgt_poll_group_000" 00:16:01.566 } 00:16:01.566 ]' 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.566 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.132 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
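# [annotation: editorial note, not part of the captured log]
# connect_authenticate verifies a successful attach two ways, both visible
# above: bdev_nvme_get_controllers must list the controller by name, and
# nvmf_subsystem_get_qpairs must report the negotiated auth parameters.
# The jq checks applied to that qpair JSON:
#
#   qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
#   jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha512
#   jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe8192
#   jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed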
DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:16:02.132 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:16:02.697 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.697 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:16:02.697 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.697 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.697 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.697 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key3 00:16:02.697 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.697 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.954 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.954 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:02.954 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:03.212 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:03.212 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:03.212 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:03.212 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:03.212 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.212 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:03.212 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.212 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.212 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n 
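# [annotation: editorial note, not part of the captured log]
# Having shown that the kernel host can authenticate (the nvme connect with
# the DHHC-1:03: secret above), the test narrows the SPDK host's allowed
# digests to sha256 while key3 was generated as a sha512 secret
# (/tmp/spdk.key-sha512.3Dx), so the attach that follows is expected to
# fail negotiation:
#
#   hostrpc bdev_nvme_set_options --dhchap-digests sha256
#   NOT bdev_connect -b nvme0 --dhchap-key key3   # digest mismatch, Code=-5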
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.212 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.470 2024/10/01 15:29:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:03.470 request: 00:16:03.470 { 00:16:03.470 "method": "bdev_nvme_attach_controller", 00:16:03.470 "params": { 00:16:03.470 "name": "nvme0", 00:16:03.470 "trtype": "tcp", 00:16:03.470 "traddr": "10.0.0.3", 00:16:03.470 "adrfam": "ipv4", 00:16:03.470 "trsvcid": "4420", 00:16:03.470 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:03.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:16:03.470 "prchk_reftag": false, 00:16:03.470 "prchk_guard": false, 00:16:03.470 "hdgst": false, 00:16:03.470 "ddgst": false, 00:16:03.470 "dhchap_key": "key3", 00:16:03.470 "allow_unrecognized_csi": false 00:16:03.470 } 00:16:03.470 } 00:16:03.470 Got JSON-RPC error response 00:16:03.470 GoRPCClient: error on JSON-RPC call 00:16:03.728 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:03.728 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:03.728 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:03.728 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:03.728 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:03.728 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:03.728 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:03.728 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:03.986 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:03.986 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:03.986 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:03.986 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:03.986 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.986 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:03.986 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.986 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.986 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.986 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.244 2024/10/01 15:29:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:04.244 request: 00:16:04.244 { 00:16:04.244 "method": "bdev_nvme_attach_controller", 00:16:04.244 "params": { 00:16:04.244 "name": "nvme0", 00:16:04.244 "trtype": "tcp", 00:16:04.244 "traddr": "10.0.0.3", 00:16:04.244 "adrfam": "ipv4", 00:16:04.244 "trsvcid": "4420", 00:16:04.244 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:04.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:16:04.244 "prchk_reftag": false, 00:16:04.244 "prchk_guard": false, 00:16:04.244 "hdgst": false, 00:16:04.244 "ddgst": false, 00:16:04.244 "dhchap_key": "key3", 00:16:04.244 "allow_unrecognized_csi": false 00:16:04.244 } 00:16:04.244 } 00:16:04.244 Got JSON-RPC error response 00:16:04.244 GoRPCClient: error on JSON-RPC call 00:16:04.244 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:04.244 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.244 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.244 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.244 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:04.244 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:04.244 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:04.244 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:04.244 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:04.245 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:04.811 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:05.069 2024/10/01 15:29:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:05.069 request: 00:16:05.069 { 00:16:05.069 "method": "bdev_nvme_attach_controller", 00:16:05.069 "params": { 00:16:05.069 "name": "nvme0", 00:16:05.069 "trtype": "tcp", 00:16:05.069 "traddr": "10.0.0.3", 00:16:05.069 "adrfam": "ipv4", 00:16:05.069 "trsvcid": "4420", 00:16:05.069 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:05.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:16:05.069 "prchk_reftag": false, 00:16:05.069 "prchk_guard": false, 00:16:05.069 "hdgst": false, 00:16:05.069 "ddgst": false, 00:16:05.069 "dhchap_key": "key0", 00:16:05.069 "dhchap_ctrlr_key": "key1", 00:16:05.069 "allow_unrecognized_csi": false 00:16:05.069 } 00:16:05.069 } 00:16:05.069 Got JSON-RPC error response 00:16:05.069 GoRPCClient: error on JSON-RPC call 00:16:05.069 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:05.069 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:05.069 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:05.069 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:05.069 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:05.069 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:05.069 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:05.636 nvme0n1 00:16:05.636 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:05.636 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:05.636 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.894 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.894 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.894 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.152 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 00:16:06.152 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.152 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
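# [annotation: editorial note, not part of the captured log]
# With the digest and DH-group lists restored to the full set, steps
# 208-210 probe authorization rather than negotiation: the host entry is
# re-added with no DH-HMAC-CHAP key at all, so demanding bidirectional
# authentication must be refused even though the keys themselves are valid,
# while step 213 (the nvme0n1 just above) confirms that attaching with
# --dhchap-key key0 alone still succeeds:
#
#   rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>
#   rpc_cmd nvmf_subsystem_add_host    nqn.2024-03.io.spdk:cnode0 <hostnqn>
#   NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1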
set +x 00:16:06.152 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.152 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:06.152 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:06.152 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:07.580 nvme0n1 00:16:07.580 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:07.580 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:07.580 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.838 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.838 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:07.838 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.838 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.838 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.838 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:07.838 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:07.838 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.097 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.097 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:16:08.097 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --hostid 425da7d6-2e40-4e0d-b2ef-fba0474bdabf -l 0 --dhchap-secret DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: --dhchap-ctrl-secret DHHC-1:03:NWNlMjM0NzRhYTlmYjEwNjIxYmMwMDhmNDhjYTg5M2M0YTZjZDQ3ODZkMDQ4NGM4OWViNDMwNzA3ZWFiZTZhZnv7O+k=: 00:16:09.031 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
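# [annotation: editorial note, not part of the captured log]
# Steps 218-219 rotate the key on the target side without recreating the
# host entry: nvmf_subsystem_set_keys swaps the allowed key to key1, after
# which a fresh attach must authenticate with key1. The same RPC is used
# through the rest of the test whenever the target's view of the keys
# changes:
#
#   rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 <hostnqn> \
#           --dhchap-key key1
#   bdev_connect -b nvme0 --dhchap-key key1   # expect success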
00:16:09.031 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:09.031 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:09.031 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:09.031 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:09.031 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:09.031 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:09.031 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.031 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.290 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:09.290 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:09.290 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:09.290 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:09.290 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.290 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:09.290 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.290 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:09.290 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:09.290 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:09.856 2024/10/01 15:29:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:09.856 request: 00:16:09.856 { 00:16:09.856 "method": "bdev_nvme_attach_controller", 00:16:09.856 "params": { 00:16:09.856 "name": "nvme0", 00:16:09.856 "trtype": "tcp", 00:16:09.856 "traddr": "10.0.0.3", 00:16:09.856 "adrfam": "ipv4", 
00:16:09.856 "trsvcid": "4420", 00:16:09.856 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:09.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf", 00:16:09.856 "prchk_reftag": false, 00:16:09.856 "prchk_guard": false, 00:16:09.856 "hdgst": false, 00:16:09.856 "ddgst": false, 00:16:09.856 "dhchap_key": "key1", 00:16:09.856 "allow_unrecognized_csi": false 00:16:09.856 } 00:16:09.856 } 00:16:09.856 Got JSON-RPC error response 00:16:09.856 GoRPCClient: error on JSON-RPC call 00:16:09.856 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:09.856 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:09.856 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:09.856 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:09.856 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:09.856 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:09.856 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:11.231 nvme0n1 00:16:11.231 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:11.231 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.231 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:11.489 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.489 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.489 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.748 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:16:11.748 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.748 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.748 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.748 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:11.748 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:11.748 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:12.314 nvme0n1 00:16:12.314 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:12.314 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.314 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:12.572 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.572 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.572 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: '' 2s 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: ]] 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGU5NDkyOTFiNDRmMmYwOWNjNDFhZWQ5NGE5ODdmYzXO8ScW: 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:13.138 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
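# [annotation: editorial note, not part of the captured log]
# nvme_set_keys (steps 240 and 244) re-keys the *kernel* controller created
# by nvme connect. Judging from the trace above, it writes the DHHC-1
# secrets under /sys/devices/virtual/nvme-fabrics/ctl/nvme0 and then waits
# for re-authentication. The attribute names below are the standard
# nvme-core ones and are an assumption here, since the redirection target
# is not visible in this log (the secret is abbreviated as well):
#
#   dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
#   echo "DHHC-1:01:MGU5...ScW:" > "$dev/dhchap_secret"   # host key
#   sleep 2s   # give the driver time to re-negotiate before waitforblk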
target/auth.sh@241 -- # waitforblk nvme0n1 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: 2s 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:15.041 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:15.042 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:15.042 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: 00:16:15.042 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:15.042 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:15.042 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:15.042 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: ]] 00:16:15.042 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDI2NWEzNDdmNWI5YjdkNjUzNTU0NGY1MTdjZGViZGJhZWIzOWUyNWVkYWUyZDhl7j1xBw==: 00:16:15.042 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:15.042 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:17.612 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:17.612 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:16:17.612 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:16:17.612 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:17.613 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:18.202 nvme0n1 00:16:18.202 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:18.202 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.202 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.202 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.202 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:18.202 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:19.137 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:19.137 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:19.137 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
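# [annotation: editorial note, not part of the captured log]
# From step 250 onward the SPDK host attaches with
# --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1. That matters for the
# remaining negative cases: when a key rotation breaks re-authentication,
# the driver gives up after roughly a second instead of retrying forever,
# so the test can observe the controller disappearing:
#
#   bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 \
#                --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1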
bdev_nvme_get_controllers 00:16:19.396 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.396 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:16:19.396 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.396 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.396 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.396 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:19.396 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:19.654 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:19.654 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:19.654 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:20.219 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:16:20.784 2024/10/01 15:29:19 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:16:20.784 request: 00:16:20.784 { 00:16:20.784 "method": "bdev_nvme_set_keys", 00:16:20.784 "params": { 00:16:20.784 "name": "nvme0", 00:16:20.784 "dhchap_key": "key1", 00:16:20.784 "dhchap_ctrlr_key": "key3" 00:16:20.784 } 00:16:20.784 } 00:16:20.784 Got JSON-RPC error response 00:16:20.784 GoRPCClient: error on JSON-RPC call 00:16:20.784 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:20.784 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.784 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.784 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.784 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:20.784 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.784 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:21.042 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:21.042 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:22.414 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:23.788 nvme0n1 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:23.788 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:24.354 2024/10/01 15:29:23 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:16:24.354 request: 00:16:24.354 { 00:16:24.354 "method": "bdev_nvme_set_keys", 00:16:24.354 "params": { 00:16:24.354 "name": "nvme0", 00:16:24.354 "dhchap_key": "key2", 00:16:24.354 "dhchap_ctrlr_key": "key0" 00:16:24.354 } 00:16:24.354 } 00:16:24.354 Got JSON-RPC error response 00:16:24.354 GoRPCClient: error on JSON-RPC call 00:16:24.354 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:24.354 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.354 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.354 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.354 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:24.354 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 
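The exchange above is SPDK's DH-HMAC-CHAP rekeying loop: the target first narrows the key pair a host may use with nvmf_subsystem_set_keys, the host then rotates its live controller with bdev_nvme_set_keys, and a pair the target does not permit is rejected with JSON-RPC error Code=-13 (Permission denied), which the NOT wrapper counts as a pass. A minimal sketch of the same handshake as direct rpc.py calls, assuming the target app on the default RPC socket, the host app on /var/tmp/host.sock as in this run, and key0..key3 already registered:

  # target side: allow only key2 (host key) / key3 (controller key) for this host NQN
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # host side: rotate the attached controller to the matching pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # a pair the target no longer allows fails with Code=-13 Msg=Permission denied
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key0 \
      || echo "rejected, as the NOT test above expects"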
00:16:24.354 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.613 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:24.613 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:25.547 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:25.547 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.547 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:25.806 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:25.806 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:25.806 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:25.806 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76469 00:16:25.806 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 76469 ']' 00:16:25.806 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 76469 00:16:25.806 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:25.806 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.806 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76469 00:16:26.065 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:26.065 killing process with pid 76469 00:16:26.065 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:26.065 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76469' 00:16:26.065 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 76469 00:16:26.065 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 76469 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:26.322 rmmod nvme_tcp 00:16:26.322 rmmod nvme_fabrics 00:16:26.322 rmmod nvme_keyring 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 81564 ']' 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 81564 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 81564 ']' 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 81564 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81564 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:26.322 killing process with pid 81564 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81564' 00:16:26.322 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 81564 00:16:26.323 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 81564 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:26.581 15:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:26.581 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.RZV /tmp/spdk.key-sha256.n1j /tmp/spdk.key-sha384.mWY /tmp/spdk.key-sha512.3Dx /tmp/spdk.key-sha512.5Af /tmp/spdk.key-sha384.qs7 /tmp/spdk.key-sha256.1Tq '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:26.839 00:16:26.839 real 3m36.471s 00:16:26.839 user 8m48.519s 00:16:26.839 sys 0m24.925s 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.839 ************************************ 00:16:26.839 END TEST nvmf_auth_target 00:16:26.839 ************************************ 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.839 ************************************ 00:16:26.839 START TEST nvmf_bdevio_no_huge 00:16:26.839 ************************************ 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:26.839 * Looking for test storage... 
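At this point the auth-target suite has removed its generated /tmp/spdk.key-* material and run_test hands control to the bdevio suite whose invocation is traced above. A sketch of reproducing just this suite outside the CI harness, assuming the checkout path used in this run and root privileges (the script loads nvme-tcp and builds the veth/bridge topology itself):

  cd /home/vagrant/spdk_repo/spdk        # adjust to your checkout
  sudo test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages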
00:16:26.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:16:26.839 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:27.098 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:27.098 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:27.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.099 --rc genhtml_branch_coverage=1 00:16:27.099 --rc genhtml_function_coverage=1 00:16:27.099 --rc genhtml_legend=1 00:16:27.099 --rc geninfo_all_blocks=1 00:16:27.099 --rc geninfo_unexecuted_blocks=1 00:16:27.099 00:16:27.099 ' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:27.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.099 --rc genhtml_branch_coverage=1 00:16:27.099 --rc genhtml_function_coverage=1 00:16:27.099 --rc genhtml_legend=1 00:16:27.099 --rc geninfo_all_blocks=1 00:16:27.099 --rc geninfo_unexecuted_blocks=1 00:16:27.099 00:16:27.099 ' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:27.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.099 --rc genhtml_branch_coverage=1 00:16:27.099 --rc genhtml_function_coverage=1 00:16:27.099 --rc genhtml_legend=1 00:16:27.099 --rc geninfo_all_blocks=1 00:16:27.099 --rc geninfo_unexecuted_blocks=1 00:16:27.099 00:16:27.099 ' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:27.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.099 --rc genhtml_branch_coverage=1 00:16:27.099 --rc genhtml_function_coverage=1 00:16:27.099 --rc genhtml_legend=1 00:16:27.099 --rc geninfo_all_blocks=1 00:16:27.099 --rc geninfo_unexecuted_blocks=1 00:16:27.099 00:16:27.099 ' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:27.099 
15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:27.099 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.099 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:27.100 
15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:27.100 Cannot find device "nvmf_init_br" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:27.100 Cannot find device "nvmf_init_br2" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:27.100 Cannot find device "nvmf_tgt_br" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.100 Cannot find device "nvmf_tgt_br2" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:27.100 Cannot find device "nvmf_init_br" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:27.100 Cannot find device "nvmf_init_br2" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:27.100 Cannot find device "nvmf_tgt_br" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:27.100 Cannot find device "nvmf_tgt_br2" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:27.100 Cannot find device "nvmf_br" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:27.100 Cannot find device "nvmf_init_if" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:27.100 Cannot find device "nvmf_init_if2" 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:16:27.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.100 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:27.360 15:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:27.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:16:27.360 00:16:27.360 --- 10.0.0.3 ping statistics --- 00:16:27.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.360 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:27.360 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:27.360 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:16:27.360 00:16:27.360 --- 10.0.0.4 ping statistics --- 00:16:27.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.360 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:27.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:27.360 00:16:27.360 --- 10.0.0.1 ping statistics --- 00:16:27.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.360 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:27.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:27.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:16:27.360 00:16:27.360 --- 10.0.0.2 ping statistics --- 00:16:27.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.360 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.360 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=82449 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 82449 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82449 ']' 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.361 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.620 [2024-10-01 15:29:26.578404] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:16:27.620 [2024-10-01 15:29:26.578530] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:27.620 [2024-10-01 15:29:26.730179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.877 [2024-10-01 15:29:26.868132] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.877 [2024-10-01 15:29:26.868211] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.877 [2024-10-01 15:29:26.868225] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.878 [2024-10-01 15:29:26.868235] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.878 [2024-10-01 15:29:26.868252] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.878 [2024-10-01 15:29:26.868414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:16:27.878 [2024-10-01 15:29:26.868531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:16:27.878 [2024-10-01 15:29:26.869042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:16:27.878 [2024-10-01 15:29:26.869055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:28.809 [2024-10-01 15:29:27.678831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:28.809 Malloc0 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:28.809 [2024-10-01 15:29:27.721999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:16:28.809 { 00:16:28.809 "params": { 00:16:28.809 "name": "Nvme$subsystem", 00:16:28.809 "trtype": "$TEST_TRANSPORT", 00:16:28.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.809 "adrfam": "ipv4", 00:16:28.809 "trsvcid": "$NVMF_PORT", 00:16:28.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.809 "hdgst": ${hdgst:-false}, 00:16:28.809 "ddgst": ${ddgst:-false} 00:16:28.809 }, 00:16:28.809 "method": "bdev_nvme_attach_controller" 00:16:28.809 } 00:16:28.809 EOF 00:16:28.809 )") 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
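gen_nvmf_target_json expands one heredoc fragment like the one above per subsystem, filling in $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and the hdgst/ddgst defaults, then pretty-prints the result through jq; the /dev/fd/62 argument to bdevio is bash process substitution exposing that generated JSON as a readable path. A sketch of the same mechanism, assuming the working directory is the SPDK checkout:

  # process substitution hands the generated config to bdevio as /dev/fd/NN,
  # so the app reads it like an ordinary --json file
  test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024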
00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:16:28.809 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:16:28.809 "params": { 00:16:28.809 "name": "Nvme1", 00:16:28.809 "trtype": "tcp", 00:16:28.809 "traddr": "10.0.0.3", 00:16:28.809 "adrfam": "ipv4", 00:16:28.809 "trsvcid": "4420", 00:16:28.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.809 "hdgst": false, 00:16:28.809 "ddgst": false 00:16:28.809 }, 00:16:28.809 "method": "bdev_nvme_attach_controller" 00:16:28.810 }' 00:16:28.810 [2024-10-01 15:29:27.786597] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:16:28.810 [2024-10-01 15:29:27.786712] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82503 ] 00:16:28.810 [2024-10-01 15:29:27.932069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:29.066 [2024-10-01 15:29:28.088298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.066 [2024-10-01 15:29:28.088456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.066 [2024-10-01 15:29:28.088479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.324 I/O targets: 00:16:29.324 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:29.324 00:16:29.324 00:16:29.324 CUnit - A unit testing framework for C - Version 2.1-3 00:16:29.324 http://cunit.sourceforge.net/ 00:16:29.324 00:16:29.324 00:16:29.324 Suite: bdevio tests on: Nvme1n1 00:16:29.324 Test: blockdev write read block ...passed 00:16:29.324 Test: blockdev write zeroes read block ...passed 00:16:29.324 Test: blockdev write zeroes read no split ...passed 00:16:29.324 Test: blockdev write zeroes read split ...passed 00:16:29.324 Test: blockdev write zeroes read split partial ...passed 00:16:29.324 Test: blockdev reset ...[2024-10-01 15:29:28.470560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:29.324 [2024-10-01 15:29:28.470692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x551450 (9): Bad file descriptor 00:16:29.324 [2024-10-01 15:29:28.490432] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:29.324 passed 00:16:29.324 Test: blockdev write read 8 blocks ...passed 00:16:29.582 Test: blockdev write read size > 128k ...passed 00:16:29.582 Test: blockdev write read invalid size ...passed 00:16:29.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:29.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:29.582 Test: blockdev write read max offset ...passed 00:16:29.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:29.582 Test: blockdev writev readv 8 blocks ...passed 00:16:29.582 Test: blockdev writev readv 30 x 1block ...passed 00:16:29.582 Test: blockdev writev readv block ...passed 00:16:29.582 Test: blockdev writev readv size > 128k ...passed 00:16:29.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:29.582 Test: blockdev comparev and writev ...[2024-10-01 15:29:28.665329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.582 [2024-10-01 15:29:28.665670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:29.582 [2024-10-01 15:29:28.665809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.582 [2024-10-01 15:29:28.665935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.582 [2024-10-01 15:29:28.666357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.582 [2024-10-01 15:29:28.666521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:29.582 [2024-10-01 15:29:28.666643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.582 [2024-10-01 15:29:28.666769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:29.582 [2024-10-01 15:29:28.667193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.582 [2024-10-01 15:29:28.667313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:29.582 [2024-10-01 15:29:28.667438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.582 [2024-10-01 15:29:28.667550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:29.582 [2024-10-01 15:29:28.668067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.582 [2024-10-01 15:29:28.668200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:29.582 [2024-10-01 15:29:28.668314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.582 [2024-10-01 15:29:28.668413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:29.582 passed 00:16:29.840 Test: blockdev nvme passthru rw ...passed 00:16:29.840 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:29:28.752786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.840 [2024-10-01 15:29:28.753013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:29.840 [2024-10-01 15:29:28.753236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.840 [2024-10-01 15:29:28.753351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:29.840 [2024-10-01 15:29:28.753599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.840 [2024-10-01 15:29:28.753715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:29.840 [2024-10-01 15:29:28.753906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.840 [2024-10-01 15:29:28.754006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:29.840 passed 00:16:29.840 Test: blockdev nvme admin passthru ...passed 00:16:29.840 Test: blockdev copy ...passed 00:16:29.840 00:16:29.840 Run Summary: Type Total Ran Passed Failed Inactive 00:16:29.840 suites 1 1 n/a 0 0 00:16:29.840 tests 23 23 23 0 0 00:16:29.840 asserts 152 152 152 0 n/a 00:16:29.840 00:16:29.840 Elapsed time = 0.946 seconds 00:16:30.097 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.097 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.097 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:30.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:30.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:30.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:30.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:16:30.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:30.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:16:30.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:30.098 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:30.354 rmmod nvme_tcp 00:16:30.354 rmmod nvme_fabrics 00:16:30.354 rmmod nvme_keyring 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 82449 ']' 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 82449 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82449 ']' 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82449 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82449 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:16:30.354 killing process with pid 82449 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82449' 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82449 00:16:30.354 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82449 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:30.611 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:30.870 15:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:16:30.870 00:16:30.870 real 0m4.094s 00:16:30.870 user 0m13.761s 00:16:30.870 sys 0m1.559s 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.870 ************************************ 00:16:30.870 END TEST nvmf_bdevio_no_huge 00:16:30.870 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:30.870 ************************************ 00:16:30.870 15:29:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:30.870 15:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:30.870 15:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.870 15:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.870 ************************************ 00:16:30.870 START TEST nvmf_tls 00:16:30.870 ************************************ 00:16:30.870 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:31.127 * Looking for test storage... 
00:16:31.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:16:31.127 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:31.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.128 --rc genhtml_branch_coverage=1 00:16:31.128 --rc genhtml_function_coverage=1 00:16:31.128 --rc genhtml_legend=1 00:16:31.128 --rc geninfo_all_blocks=1 00:16:31.128 --rc geninfo_unexecuted_blocks=1 00:16:31.128 00:16:31.128 ' 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:31.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.128 --rc genhtml_branch_coverage=1 00:16:31.128 --rc genhtml_function_coverage=1 00:16:31.128 --rc genhtml_legend=1 00:16:31.128 --rc geninfo_all_blocks=1 00:16:31.128 --rc geninfo_unexecuted_blocks=1 00:16:31.128 00:16:31.128 ' 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:31.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.128 --rc genhtml_branch_coverage=1 00:16:31.128 --rc genhtml_function_coverage=1 00:16:31.128 --rc genhtml_legend=1 00:16:31.128 --rc geninfo_all_blocks=1 00:16:31.128 --rc geninfo_unexecuted_blocks=1 00:16:31.128 00:16:31.128 ' 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:31.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.128 --rc genhtml_branch_coverage=1 00:16:31.128 --rc genhtml_function_coverage=1 00:16:31.128 --rc genhtml_legend=1 00:16:31.128 --rc geninfo_all_blocks=1 00:16:31.128 --rc geninfo_unexecuted_blocks=1 00:16:31.128 00:16:31.128 ' 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.128 15:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.128 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:31.128 
15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:31.128 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:31.129 Cannot find device "nvmf_init_br" 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:31.129 Cannot find device "nvmf_init_br2" 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:31.129 Cannot find device "nvmf_tgt_br" 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:31.129 Cannot find device "nvmf_tgt_br2" 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:31.129 Cannot find device "nvmf_init_br" 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:31.129 Cannot find device "nvmf_init_br2" 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:31.129 Cannot find device "nvmf_tgt_br" 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:16:31.129 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:31.387 Cannot find device "nvmf_tgt_br2" 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:31.387 Cannot find device "nvmf_br" 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:31.387 Cannot find device "nvmf_init_if" 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:31.387 Cannot find device "nvmf_init_if2" 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:31.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:31.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:31.387 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:31.646 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:31.646 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:31.646 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:31.646 15:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:31.646 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:31.646 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:31.646 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:31.646 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:31.646 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:16:31.646 00:16:31.646 --- 10.0.0.3 ping statistics --- 00:16:31.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.646 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:31.646 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:31.646 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:31.646 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:16:31.646 00:16:31.646 --- 10.0.0.4 ping statistics --- 00:16:31.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.646 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:31.646 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:31.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:31.646 00:16:31.646 --- 10.0.0.1 ping statistics --- 00:16:31.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.647 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:31.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:31.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:16:31.647 00:16:31.647 --- 10.0.0.2 ping statistics --- 00:16:31.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.647 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=82753 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 82753 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 82753 ']' 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.647 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.647 [2024-10-01 15:29:30.696918] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
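Condensed for readability, the target-side TLS setup traced over the following records boils down to the RPC sequence below; every command appears verbatim in the trace (the trace also round-trips sock_impl_get_options to sanity-check tls-version and ktls settings, elided here):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py sock_set_default_impl -i ssl          # select the ssl sock implementation
  $rpc_py sock_impl_set_options -i ssl --tls-version 13
  $rpc_py framework_start_init                  # releases the --wait-for-rpc hold
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc_py bdev_malloc_create 32 4096 -b malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc_py keyring_file_add_key key0 /tmp/tmp.y7QzmDX9YZ
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0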
00:16:31.647 [2024-10-01 15:29:30.697043] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.905 [2024-10-01 15:29:30.843543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.905 [2024-10-01 15:29:30.913508] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.905 [2024-10-01 15:29:30.913576] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.905 [2024-10-01 15:29:30.913591] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.905 [2024-10-01 15:29:30.913601] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.905 [2024-10-01 15:29:30.913611] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.905 [2024-10-01 15:29:30.913654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.837 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.837 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:32.837 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:32.837 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:32.837 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.837 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.837 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:16:32.837 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:33.093 true 00:16:33.093 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:33.094 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:16:33.351 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:16:33.351 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:16:33.351 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:33.609 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:33.609 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:16:33.867 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:16:33.867 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:16:33.867 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:34.433 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:34.433 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:16:34.691 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:16:34.691 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:16:34.691 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:34.691 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:16:34.950 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:34.950 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:34.950 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:35.219 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:35.219 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:35.476 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:16:35.476 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:16:35.476 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:35.734 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:35.734 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.y7QzmDX9YZ 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.2rjyoiDZhp 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.y7QzmDX9YZ 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.2rjyoiDZhp 00:16:36.300 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:36.559 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:37.125 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.y7QzmDX9YZ 00:16:37.125 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.y7QzmDX9YZ 00:16:37.125 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:37.125 [2024-10-01 15:29:36.269874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.383 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:37.641 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:37.899 [2024-10-01 15:29:36.930033] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:37.899 [2024-10-01 15:29:36.930271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:37.899 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:38.163 malloc0 00:16:38.163 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:38.446 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.y7QzmDX9YZ 00:16:39.012 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:39.270 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.y7QzmDX9YZ 00:16:51.505 Initializing NVMe Controllers 00:16:51.505 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:51.505 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:51.505 Initialization complete. Launching workers. 00:16:51.505 ======================================================== 00:16:51.505 Latency(us) 00:16:51.505 Device Information : IOPS MiB/s Average min max 00:16:51.505 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9071.98 35.44 7056.51 1462.64 11823.85 00:16:51.505 ======================================================== 00:16:51.505 Total : 9071.98 35.44 7056.51 1462.64 11823.85 00:16:51.505 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y7QzmDX9YZ 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.y7QzmDX9YZ 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83140 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83140 /var/tmp/bdevperf.sock 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83140 ']' 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.505 [2024-10-01 15:29:48.528626] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
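The two NVMeTLSkey-1 strings written to the /tmp keyfiles above come from the format_interchange_psk helper; a minimal sketch of what its inline python appears to compute is given below (PSK interchange framing: configured key bytes plus a little-endian CRC32, base64-encoded, with a trailing colon; this is a reconstruction, not a quote of nvmf/common.sh):

  format_interchange_psk() {
      # $1 = configured key (ASCII), $2 = hash id (1 = HMAC SHA-256)
      python3 - "$1" "$2" <<'PY'
  import base64, sys, zlib
  key = sys.argv[1].encode()
  digest = int(sys.argv[2])
  # Append a little-endian CRC32 of the key bytes, base64 the result,
  # and wrap it in the PSK interchange framing (note the trailing colon).
  crc = zlib.crc32(key).to_bytes(4, "little")
  print(f"NVMeTLSkey-1:{digest:02x}:" + base64.b64encode(key + crc).decode() + ":")
  PY
  }
  # format_interchange_psk 00112233445566778899aabbccddeeff 1
  #   -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: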
00:16:51.505 [2024-10-01 15:29:48.528746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83140 ] 00:16:51.505 [2024-10-01 15:29:48.667213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.505 [2024-10-01 15:29:48.767792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:51.505 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y7QzmDX9YZ 00:16:51.505 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:51.505 [2024-10-01 15:29:49.472785] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.505 TLSTESTn1 00:16:51.505 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:51.505 Running I/O for 10 seconds... 00:17:00.763 3776.00 IOPS, 14.75 MiB/s 3828.50 IOPS, 14.96 MiB/s 3854.33 IOPS, 15.06 MiB/s 3876.00 IOPS, 15.14 MiB/s 3883.20 IOPS, 15.17 MiB/s 3869.83 IOPS, 15.12 MiB/s 3873.29 IOPS, 15.13 MiB/s 3879.25 IOPS, 15.15 MiB/s 3879.44 IOPS, 15.15 MiB/s 3876.50 IOPS, 15.14 MiB/s 00:17:00.763 Latency(us) 00:17:00.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.763 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:00.763 Verification LBA range: start 0x0 length 0x2000 00:17:00.763 TLSTESTn1 : 10.02 3879.90 15.16 0.00 0.00 32917.80 9830.40 30504.03 00:17:00.763 =================================================================================================================== 00:17:00.763 Total : 3879.90 15.16 0.00 0.00 32917.80 9830.40 30504.03 00:17:00.763 { 00:17:00.763 "results": [ 00:17:00.763 { 00:17:00.763 "job": "TLSTESTn1", 00:17:00.763 "core_mask": "0x4", 00:17:00.763 "workload": "verify", 00:17:00.763 "status": "finished", 00:17:00.763 "verify_range": { 00:17:00.763 "start": 0, 00:17:00.763 "length": 8192 00:17:00.763 }, 00:17:00.763 "queue_depth": 128, 00:17:00.763 "io_size": 4096, 00:17:00.763 "runtime": 10.023703, 00:17:00.763 "iops": 3879.9034648173433, 00:17:00.763 "mibps": 15.155872909442747, 00:17:00.763 "io_failed": 0, 00:17:00.763 "io_timeout": 0, 00:17:00.763 "avg_latency_us": 32917.797972047745, 00:17:00.763 "min_latency_us": 9830.4, 00:17:00.763 "max_latency_us": 30504.02909090909 00:17:00.763 } 00:17:00.763 ], 00:17:00.763 "core_count": 1 00:17:00.763 } 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83140 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83140 ']' 00:17:00.764 
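The killprocess sequence being traced here follows the usual autotest teardown pattern; reconstructed as a sketch (the real helper in common/autotest_common.sh handles more cases, e.g. sudo-wrapped processes, than shown):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                  # probe: is the pid still alive?
      local name
      [[ $(uname) == Linux ]] && name=$(ps --no-headers -o comm= "$pid")
      [[ $name == sudo ]] && return 1             # real helper special-cases sudo; simplified here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                 # reap and propagate exit status
  }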
15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83140 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83140 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:00.764 killing process with pid 83140 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83140' 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83140 00:17:00.764 Received shutdown signal, test time was about 10.000000 seconds 00:17:00.764 00:17:00.764 Latency(us) 00:17:00.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.764 =================================================================================================================== 00:17:00.764 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83140 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2rjyoiDZhp 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2rjyoiDZhp 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2rjyoiDZhp 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2rjyoiDZhp 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83281 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83281 /var/tmp/bdevperf.sock 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83281 ']' 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.764 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.021 [2024-10-01 15:29:59.998594] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:17:01.021 [2024-10-01 15:29:59.999455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83281 ] 00:17:01.021 [2024-10-01 15:30:00.154413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.279 [2024-10-01 15:30:00.226455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.279 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.279 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:01.279 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2rjyoiDZhp 00:17:01.536 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:01.795 [2024-10-01 15:30:00.905035] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.795 [2024-10-01 15:30:00.914678] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:01.795 [2024-10-01 15:30:00.915666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0eb80 (107): Transport endpoint is not connected 00:17:01.795 [2024-10-01 15:30:00.916648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0eb80 (9): Bad file descriptor 00:17:01.795 [2024-10-01 15:30:00.917645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:01.795 [2024-10-01 15:30:00.917674] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:01.795 [2024-10-01 15:30:00.917685] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 
subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:01.795 [2024-10-01 15:30:00.917696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:01.795 2024/10/01 15:30:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:01.795 request: 00:17:01.795 { 00:17:01.795 "method": "bdev_nvme_attach_controller", 00:17:01.795 "params": { 00:17:01.795 "name": "TLSTEST", 00:17:01.795 "trtype": "tcp", 00:17:01.795 "traddr": "10.0.0.3", 00:17:01.795 "adrfam": "ipv4", 00:17:01.795 "trsvcid": "4420", 00:17:01.795 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.795 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:01.795 "prchk_reftag": false, 00:17:01.795 "prchk_guard": false, 00:17:01.795 "hdgst": false, 00:17:01.795 "ddgst": false, 00:17:01.795 "psk": "key0", 00:17:01.795 "allow_unrecognized_csi": false 00:17:01.795 } 00:17:01.795 } 00:17:01.795 Got JSON-RPC error response 00:17:01.795 GoRPCClient: error on JSON-RPC call 00:17:01.795 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83281 00:17:01.795 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83281 ']' 00:17:01.795 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83281 00:17:01.795 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:01.795 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.795 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83281 00:17:02.053 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:02.053 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:02.053 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83281' 00:17:02.053 killing process with pid 83281 00:17:02.053 Received shutdown signal, test time was about 10.000000 seconds 00:17:02.053 00:17:02.053 Latency(us) 00:17:02.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.053 =================================================================================================================== 00:17:02.053 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.053 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83281 00:17:02.053 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83281 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:02.053 15:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.y7QzmDX9YZ 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.y7QzmDX9YZ 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.y7QzmDX9YZ 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.y7QzmDX9YZ 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83323 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83323 /var/tmp/bdevperf.sock 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83323 ']' 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.053 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.053 [2024-10-01 15:30:01.183500] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
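
Each NOT run_bdevperf case in this suite launches the same bdevperf harness before attempting the TLS attach, so the flags are worth decoding once. The restatement below is an annotated sketch; the flag readings come from bdevperf usage conventions rather than from this log, except where the log itself confirms them ("Reactor started on core 2" matches -m 0x4, and the later bdevperf.py perform_tests call matches -z).

    # the bdevperf invocation used by every run_bdevperf case in this trace
    #   -m 0x4   core mask: run the single reactor on core 2
    #   -z       start idle and wait for JSON-RPC to configure and start tests
    #   -r PATH  UNIX-domain JSON-RPC socket the test scripts drive
    #   -q 128   queue depth; -o 4096: 4 KiB I/O size
    #   -w verify  verify workload; -t 10: run for 10 seconds
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
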
00:17:02.053 [2024-10-01 15:30:01.183607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83323 ] 00:17:02.312 [2024-10-01 15:30:01.321266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.312 [2024-10-01 15:30:01.393006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.312 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.312 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:02.312 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y7QzmDX9YZ 00:17:02.875 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:03.133 [2024-10-01 15:30:02.044800] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:03.133 [2024-10-01 15:30:02.054905] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:03.133 [2024-10-01 15:30:02.054963] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:03.133 [2024-10-01 15:30:02.055031] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:03.133 [2024-10-01 15:30:02.055618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b7b80 (107): Transport endpoint is not connected 00:17:03.133 [2024-10-01 15:30:02.056607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b7b80 (9): Bad file descriptor 00:17:03.133 [2024-10-01 15:30:02.057603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:03.133 [2024-10-01 15:30:02.057634] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:03.133 [2024-10-01 15:30:02.057646] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:03.134 [2024-10-01 15:30:02.057658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
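
The lookup failure above is the point of this negative case: the target resolves the TLS PSK by the identity string "NVMe0R01 <hostnqn> <subnqn>", and no key was ever associated with nqn.2016-06.io.spdk:host2, so the handshake is refused, the host side sees errno 107, and the JSON-RPC dump that follows is the same failure surfaced to the Go client as an Input/output error. A positive-path setup would associate host2 with the key using the same RPCs this suite later applies to host1; a hypothetical sketch (deliberately not done here, since the test expects the failure):

    # hypothetical positive path: register a PSK for host2 so the identity lookup succeeds
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc keyring_file_add_key key0 /tmp/tmp.y7QzmDX9YZ
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk key0
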
00:17:03.134 2024/10/01 15:30:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:03.134 request: 00:17:03.134 { 00:17:03.134 "method": "bdev_nvme_attach_controller", 00:17:03.134 "params": { 00:17:03.134 "name": "TLSTEST", 00:17:03.134 "trtype": "tcp", 00:17:03.134 "traddr": "10.0.0.3", 00:17:03.134 "adrfam": "ipv4", 00:17:03.134 "trsvcid": "4420", 00:17:03.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.134 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:03.134 "prchk_reftag": false, 00:17:03.134 "prchk_guard": false, 00:17:03.134 "hdgst": false, 00:17:03.134 "ddgst": false, 00:17:03.134 "psk": "key0", 00:17:03.134 "allow_unrecognized_csi": false 00:17:03.134 } 00:17:03.134 } 00:17:03.134 Got JSON-RPC error response 00:17:03.134 GoRPCClient: error on JSON-RPC call 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83323 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83323 ']' 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83323 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83323 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:03.134 killing process with pid 83323 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83323' 00:17:03.134 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.134 00:17:03.134 Latency(us) 00:17:03.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.134 =================================================================================================================== 00:17:03.134 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83323 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83323 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.y7QzmDX9YZ 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.y7QzmDX9YZ 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.y7QzmDX9YZ 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.y7QzmDX9YZ 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83362 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83362 /var/tmp/bdevperf.sock 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83362 ']' 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.134 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.392 [2024-10-01 15:30:02.323860] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
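
All of these cases sit under the NOT wrapper from autotest_common.sh, so a failing bdevperf attach is a passing test; the es=1 bookkeeping in the trace is the wrapper checking the exit status. A sketch of the idiom, assuming NOT is essentially an exit-status inverter (the real helper also tracks the es value seen above):

    # assumed shape of the NOT helper: succeed exactly when the wrapped command fails
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # expected failure
    }

On the shutdown latency tables above, the min value 18446744073709551616.00 is 2^64; it is most likely the min-latency accumulator still at its UINT64_MAX-style initial value, printed through a double, because no I/O completed before the shutdown signal.
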
00:17:03.392 [2024-10-01 15:30:02.323972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83362 ] 00:17:03.392 [2024-10-01 15:30:02.459271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.392 [2024-10-01 15:30:02.530042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.325 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.325 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:04.325 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y7QzmDX9YZ 00:17:04.582 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:04.840 [2024-10-01 15:30:03.900559] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:04.840 [2024-10-01 15:30:03.906639] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:04.840 [2024-10-01 15:30:03.906685] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:04.840 [2024-10-01 15:30:03.906742] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:04.840 [2024-10-01 15:30:03.907243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd28b80 (107): Transport endpoint is not connected 00:17:04.840 [2024-10-01 15:30:03.908229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd28b80 (9): Bad file descriptor 00:17:04.840 [2024-10-01 15:30:03.909224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:04.840 [2024-10-01 15:30:03.909258] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:04.840 [2024-10-01 15:30:03.909273] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:04.840 [2024-10-01 15:30:03.909288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:04.840 2024/10/01 15:30:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:04.840 request: 00:17:04.840 { 00:17:04.840 "method": "bdev_nvme_attach_controller", 00:17:04.840 "params": { 00:17:04.840 "name": "TLSTEST", 00:17:04.840 "trtype": "tcp", 00:17:04.840 "traddr": "10.0.0.3", 00:17:04.841 "adrfam": "ipv4", 00:17:04.841 "trsvcid": "4420", 00:17:04.841 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:04.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.841 "prchk_reftag": false, 00:17:04.841 "prchk_guard": false, 00:17:04.841 "hdgst": false, 00:17:04.841 "ddgst": false, 00:17:04.841 "psk": "key0", 00:17:04.841 "allow_unrecognized_csi": false 00:17:04.841 } 00:17:04.841 } 00:17:04.841 Got JSON-RPC error response 00:17:04.841 GoRPCClient: error on JSON-RPC call 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83362 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83362 ']' 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83362 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83362 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:04.841 killing process with pid 83362 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83362' 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83362 00:17:04.841 Received shutdown signal, test time was about 10.000000 seconds 00:17:04.841 00:17:04.841 Latency(us) 00:17:04.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.841 =================================================================================================================== 00:17:04.841 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:04.841 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83362 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83420 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83420 /var/tmp/bdevperf.sock 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83420 ']' 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.100 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.100 [2024-10-01 15:30:04.242588] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
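
As in the earlier cases, the test blocks on waitforlisten until the freshly started bdevperf answers RPC on /var/tmp/bdevperf.sock, which is what the repeated "Waiting for process to start up and listen..." lines are. A minimal sketch of that polling idea, assuming rpc_get_methods as the liveness probe (the real helper in autotest_common.sh is more thorough):

    # minimal waitforlisten-style poll: pid must stay alive until the socket answers
    wait_for_rpc_sock() {
        local pid=$1 sock=$2
        while kill -0 "$pid" 2>/dev/null; do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1    # process exited before listening
    }
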
00:17:05.100 [2024-10-01 15:30:04.242703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83420 ] 00:17:05.358 [2024-10-01 15:30:04.373105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.358 [2024-10-01 15:30:04.432731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.616 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.616 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:05.616 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:05.875 [2024-10-01 15:30:04.957470] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:05.875 [2024-10-01 15:30:04.957521] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:05.875 2024/10/01 15:30:04 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:17:05.875 request: 00:17:05.875 { 00:17:05.875 "method": "keyring_file_add_key", 00:17:05.875 "params": { 00:17:05.875 "name": "key0", 00:17:05.875 "path": "" 00:17:05.875 } 00:17:05.875 } 00:17:05.875 Got JSON-RPC error response 00:17:05.875 GoRPCClient: error on JSON-RPC call 00:17:05.875 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:06.443 [2024-10-01 15:30:05.305692] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.443 [2024-10-01 15:30:05.305776] bdev_nvme.c:6389:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:06.443 2024/10/01 15:30:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:17:06.443 request: 00:17:06.443 { 00:17:06.443 "method": "bdev_nvme_attach_controller", 00:17:06.443 "params": { 00:17:06.443 "name": "TLSTEST", 00:17:06.443 "trtype": "tcp", 00:17:06.443 "traddr": "10.0.0.3", 00:17:06.443 "adrfam": "ipv4", 00:17:06.443 "trsvcid": "4420", 00:17:06.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.443 "prchk_reftag": false, 00:17:06.443 "prchk_guard": false, 00:17:06.443 "hdgst": false, 00:17:06.443 "ddgst": false, 00:17:06.443 "psk": "key0", 00:17:06.443 "allow_unrecognized_csi": false 00:17:06.443 } 00:17:06.443 } 00:17:06.443 Got JSON-RPC error response 00:17:06.443 GoRPCClient: error on JSON-RPC call 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83420 00:17:06.443 15:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83420 ']' 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83420 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83420 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83420' 00:17:06.443 killing process with pid 83420 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83420 00:17:06.443 Received shutdown signal, test time was about 10.000000 seconds 00:17:06.443 00:17:06.443 Latency(us) 00:17:06.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.443 =================================================================================================================== 00:17:06.443 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83420 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82753 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 82753 ']' 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 82753 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82753 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:06.443 killing process with pid 82753 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82753' 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 82753 00:17:06.443 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 82753 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 2 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.z6DHlBUpjk 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.z6DHlBUpjk 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83475 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83475 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83475 ']' 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.702 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.702 [2024-10-01 15:30:05.859198] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
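
The key_long value just produced is the NVMe TLS PSK interchange form: the literal prefix NVMeTLSkey-1, a hash-selector field built from the "2" argument, and a base64 payload that decodes to the 48 ASCII characters of the configured key followed by four extra bytes (the tail wWXNJw== decodes to c1 65 cd 27). A rough reconstruction of what the inline "python -" step computes, assuming those trailing bytes are a little-endian CRC-32 of the key, which is how the format_key helper reads to me:

    # reconstruct NVMeTLSkey-1:02:<base64>: from the configured key (CRC-32 LE assumed)
    python3 - <<'EOF'
    import base64, zlib
    key = b"00112233445566778899aabbccddeeff0011223344556677"
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
    EOF
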
00:17:06.702 [2024-10-01 15:30:05.859303] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.961 [2024-10-01 15:30:05.994076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.961 [2024-10-01 15:30:06.053571] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.961 [2024-10-01 15:30:06.053631] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.961 [2024-10-01 15:30:06.053643] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.961 [2024-10-01 15:30:06.053651] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.961 [2024-10-01 15:30:06.053658] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.961 [2024-10-01 15:30:06.053685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.896 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.896 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:07.896 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:07.896 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:07.896 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.896 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.896 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.z6DHlBUpjk 00:17:07.896 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z6DHlBUpjk 00:17:07.896 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:08.155 [2024-10-01 15:30:07.247675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.155 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:08.413 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:08.980 [2024-10-01 15:30:07.871771] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:08.980 [2024-10-01 15:30:07.872052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.980 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:09.238 malloc0 00:17:09.238 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:09.497 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk 00:17:09.755 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z6DHlBUpjk 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z6DHlBUpjk 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83591 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83591 /var/tmp/bdevperf.sock 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83591 ']' 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:10.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:10.012 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.270 [2024-10-01 15:30:09.196324] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
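
For reference, the whole target-side sequence that setup_nvmf_tgt just executed, gathered in one place from the preceding trace; every command below appears verbatim above, only the $rpc shorthand is added:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k makes this listener require TLS, hence the "experimental" notices in the log
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With the key registered for host1 and the 0600-mode key file in place, the bdevperf attach that follows succeeds instead of failing like the earlier cases.
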
00:17:10.270 [2024-10-01 15:30:09.196489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83591 ] 00:17:10.270 [2024-10-01 15:30:09.334627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.270 [2024-10-01 15:30:09.421314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.527 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.527 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:10.527 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk 00:17:10.785 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:11.045 [2024-10-01 15:30:10.197540] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.309 TLSTESTn1 00:17:11.309 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:11.309 Running I/O for 10 seconds... 00:17:21.568 3608.00 IOPS, 14.09 MiB/s 3726.50 IOPS, 14.56 MiB/s 3786.00 IOPS, 14.79 MiB/s 3846.75 IOPS, 15.03 MiB/s 3861.20 IOPS, 15.08 MiB/s 3885.17 IOPS, 15.18 MiB/s 3909.29 IOPS, 15.27 MiB/s 3872.12 IOPS, 15.13 MiB/s 3868.67 IOPS, 15.11 MiB/s 3860.30 IOPS, 15.08 MiB/s 00:17:21.568 Latency(us) 00:17:21.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.568 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:21.568 Verification LBA range: start 0x0 length 0x2000 00:17:21.568 TLSTESTn1 : 10.04 3859.38 15.08 0.00 0.00 33076.18 7983.48 42896.29 00:17:21.568 =================================================================================================================== 00:17:21.568 Total : 3859.38 15.08 0.00 0.00 33076.18 7983.48 42896.29 00:17:21.568 { 00:17:21.568 "results": [ 00:17:21.568 { 00:17:21.568 "job": "TLSTESTn1", 00:17:21.568 "core_mask": "0x4", 00:17:21.568 "workload": "verify", 00:17:21.568 "status": "finished", 00:17:21.568 "verify_range": { 00:17:21.568 "start": 0, 00:17:21.568 "length": 8192 00:17:21.568 }, 00:17:21.568 "queue_depth": 128, 00:17:21.568 "io_size": 4096, 00:17:21.568 "runtime": 10.035559, 00:17:21.568 "iops": 3859.3764433052506, 00:17:21.568 "mibps": 15.075689231661135, 00:17:21.568 "io_failed": 0, 00:17:21.568 "io_timeout": 0, 00:17:21.568 "avg_latency_us": 33076.18414715955, 00:17:21.568 "min_latency_us": 7983.476363636363, 00:17:21.568 "max_latency_us": 42896.29090909091 00:17:21.568 } 00:17:21.568 ], 00:17:21.568 "core_count": 1 00:17:21.568 } 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83591 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83591 ']' 
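
One consistency check on the TLSTESTn1 summary just printed: at queue depth 128, Little's law says average latency should be about qd/IOPS, and 128 / 3859.38 IOPS is roughly 33,166 us, within about 0.3% of the reported 33,076.18 us average; the small gap is expected since the queue is not full for the entire 10.04 s run.

    # Little's law check against the summary above
    python3 -c 'print(128 / 3859.3764433052506 * 1e6)'   # ~33166 us vs reported 33076 us
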
00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83591 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83591 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:21.568 killing process with pid 83591 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83591' 00:17:21.568 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.568 00:17:21.568 Latency(us) 00:17:21.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.568 =================================================================================================================== 00:17:21.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83591 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83591 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.z6DHlBUpjk 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z6DHlBUpjk 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z6DHlBUpjk 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z6DHlBUpjk 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z6DHlBUpjk 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83739 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.568 15:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83739 /var/tmp/bdevperf.sock 00:17:21.568 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:21.569 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83739 ']' 00:17:21.569 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.569 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:21.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.569 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.569 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:21.569 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.826 [2024-10-01 15:30:20.796519] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:17:21.826 [2024-10-01 15:30:20.796663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83739 ] 00:17:21.826 [2024-10-01 15:30:20.935194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.084 [2024-10-01 15:30:20.995924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.084 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.084 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:22.084 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk 00:17:22.343 [2024-10-01 15:30:21.353720] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z6DHlBUpjk': 0100666 00:17:22.343 [2024-10-01 15:30:21.353775] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:22.343 2024/10/01 15:30:21 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.z6DHlBUpjk], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:17:22.343 request: 00:17:22.343 { 00:17:22.343 "method": "keyring_file_add_key", 00:17:22.343 "params": { 00:17:22.343 "name": "key0", 00:17:22.343 "path": "/tmp/tmp.z6DHlBUpjk" 00:17:22.343 } 00:17:22.343 } 00:17:22.343 Got JSON-RPC error response 00:17:22.343 GoRPCClient: error on JSON-RPC call 00:17:22.343 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:22.602 [2024-10-01 15:30:21.753911] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.602 [2024-10-01 15:30:21.753978] 
bdev_nvme.c:6389:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:22.602 2024/10/01 15:30:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:17:22.602 request: 00:17:22.602 { 00:17:22.602 "method": "bdev_nvme_attach_controller", 00:17:22.602 "params": { 00:17:22.602 "name": "TLSTEST", 00:17:22.602 "trtype": "tcp", 00:17:22.602 "traddr": "10.0.0.3", 00:17:22.602 "adrfam": "ipv4", 00:17:22.602 "trsvcid": "4420", 00:17:22.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.602 "prchk_reftag": false, 00:17:22.602 "prchk_guard": false, 00:17:22.602 "hdgst": false, 00:17:22.602 "ddgst": false, 00:17:22.602 "psk": "key0", 00:17:22.602 "allow_unrecognized_csi": false 00:17:22.602 } 00:17:22.602 } 00:17:22.602 Got JSON-RPC error response 00:17:22.602 GoRPCClient: error on JSON-RPC call 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83739 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83739 ']' 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83739 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83739 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:22.860 killing process with pid 83739 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83739' 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83739 00:17:22.860 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.860 00:17:22.860 Latency(us) 00:17:22.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.860 =================================================================================================================== 00:17:22.860 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83739 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.860 15:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83475 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83475 ']' 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83475 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.860 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83475 00:17:22.860 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:22.860 killing process with pid 83475 00:17:22.860 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:22.860 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83475' 00:17:22.860 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83475 00:17:22.860 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83475 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83789 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83789 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83789 ']' 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.118 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.118 [2024-10-01 15:30:22.276355] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
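
The two RPC failures above are the test's intended negative path: SPDK's file-based keyring refuses a PSK file that is readable by group or others (here mode 0100666), and with no key loaded the subsequent TLS attach fails with Code=-126 'Required key not available'. A minimal sketch of that sequence, reusing the exact socket and paths from this run:

    # World-readable key file: keyring_file_add_key rejects it
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk
    # -> Code=-1 Msg=Operation not permitted

    # The TLS controller attach that references key0 then fails too
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # -> Code=-126 Msg=Required key not available

The remedy applied later in this run (target/tls.sh@182) is simply chmod 0600 on the key file before loading it.
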
00:17:23.118 [2024-10-01 15:30:22.276507] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.376 [2024-10-01 15:30:22.416356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.376 [2024-10-01 15:30:22.481875] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.376 [2024-10-01 15:30:22.481941] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.376 [2024-10-01 15:30:22.481958] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.376 [2024-10-01 15:30:22.481967] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.376 [2024-10-01 15:30:22.481974] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.376 [2024-10-01 15:30:22.482002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.z6DHlBUpjk 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.z6DHlBUpjk 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.z6DHlBUpjk 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z6DHlBUpjk 00:17:24.312 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:24.573 [2024-10-01 15:30:23.688546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.573 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:24.833 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:25.091 [2024-10-01 15:30:24.236668] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:25.091 [2024-10-01 15:30:24.236969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:25.349 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:25.608 malloc0 00:17:25.608 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:25.867 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk 00:17:26.126 [2024-10-01 15:30:25.223402] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z6DHlBUpjk': 0100666 00:17:26.126 [2024-10-01 15:30:25.223488] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:26.126 2024/10/01 15:30:25 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.z6DHlBUpjk], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:17:26.126 request: 00:17:26.126 { 00:17:26.126 "method": "keyring_file_add_key", 00:17:26.126 "params": { 00:17:26.126 "name": "key0", 00:17:26.126 "path": "/tmp/tmp.z6DHlBUpjk" 00:17:26.126 } 00:17:26.126 } 00:17:26.126 Got JSON-RPC error response 00:17:26.126 GoRPCClient: error on JSON-RPC call 00:17:26.126 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:26.384 [2024-10-01 15:30:25.519506] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:26.384 [2024-10-01 15:30:25.519579] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:26.384 2024/10/01 15:30:25 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:26.384 request: 00:17:26.384 { 00:17:26.384 "method": "nvmf_subsystem_add_host", 00:17:26.384 "params": { 00:17:26.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.384 "host": "nqn.2016-06.io.spdk:host1", 00:17:26.384 "psk": "key0" 00:17:26.384 } 00:17:26.384 } 00:17:26.384 Got JSON-RPC error response 00:17:26.384 GoRPCClient: error on JSON-RPC call 00:17:26.384 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:26.384 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:26.384 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:26.384 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:26.384 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83789 00:17:26.384 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83789 ']' 00:17:26.384 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 83789 00:17:26.641 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:26.641 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.641 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83789 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:26.642 killing process with pid 83789 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83789' 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83789 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83789 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.z6DHlBUpjk 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83913 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83913 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83913 ']' 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.642 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.900 [2024-10-01 15:30:25.859042] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:17:26.900 [2024-10-01 15:30:25.859172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.900 [2024-10-01 15:30:25.998231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.900 [2024-10-01 15:30:26.055938] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
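
For reference, this is the target-side TLS setup that setup_nvmf_tgt (target/tls.sh@50-59) drives over /var/tmp/spdk.sock, assembled from the RPCs traced above; NQNs, addresses and key path are the ones used in this run, and $RPC is only shorthand introduced here for the rpc.py invocation:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o    # -o shows up as c2h_success:false in the saved config
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -k       # -k: TLS listener (flagged experimental in the log)
    $RPC bdev_malloc_create 32 4096 -b malloc0   # 32 MiB ramdisk, 4 KiB blocks (8192 blocks in the dump)
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk   # key file must be mode 0600
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The add_host failure above ("Key 'key0' does not exist") is the same negative test continued: the key file was still 0666, so keyring_file_add_key never succeeded and the host registration had no key to bind to.
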
00:17:26.900 [2024-10-01 15:30:26.055992] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.900 [2024-10-01 15:30:26.056004] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.900 [2024-10-01 15:30:26.056013] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.900 [2024-10-01 15:30:26.056020] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.900 [2024-10-01 15:30:26.056049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.158 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:27.158 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:27.158 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:27.158 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:27.159 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.159 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.159 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.z6DHlBUpjk 00:17:27.159 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z6DHlBUpjk 00:17:27.159 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:27.417 [2024-10-01 15:30:26.487529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.417 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:27.674 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:27.933 [2024-10-01 15:30:27.059672] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:27.933 [2024-10-01 15:30:27.059917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:27.933 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:28.500 malloc0 00:17:28.500 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:28.758 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk 00:17:29.017 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:29.302 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84015 00:17:29.302 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:29.302 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.302 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84015 /var/tmp/bdevperf.sock 00:17:29.302 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84015 ']' 00:17:29.302 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.302 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.302 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.302 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.302 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.302 [2024-10-01 15:30:28.285816] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:17:29.302 [2024-10-01 15:30:28.285911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84015 ] 00:17:29.302 [2024-10-01 15:30:28.423152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.583 [2024-10-01 15:30:28.515622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.583 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.583 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:29.583 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk 00:17:29.841 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:30.097 [2024-10-01 15:30:29.253699] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:30.390 TLSTESTn1 00:17:30.390 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:30.649 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:30.649 "subsystems": [ 00:17:30.649 { 00:17:30.649 "subsystem": "keyring", 00:17:30.649 "config": [ 00:17:30.649 { 00:17:30.649 "method": "keyring_file_add_key", 00:17:30.649 "params": { 00:17:30.649 "name": "key0", 00:17:30.649 "path": "/tmp/tmp.z6DHlBUpjk" 00:17:30.649 } 00:17:30.649 } 00:17:30.649 ] 00:17:30.649 }, 00:17:30.649 { 00:17:30.649 "subsystem": "iobuf", 00:17:30.649 "config": [ 00:17:30.649 { 00:17:30.649 "method": "iobuf_set_options", 00:17:30.649 "params": { 00:17:30.649 "large_bufsize": 135168, 00:17:30.649 "large_pool_count": 1024, 00:17:30.649 "small_bufsize": 8192, 00:17:30.649 
"small_pool_count": 8192 00:17:30.649 } 00:17:30.649 } 00:17:30.649 ] 00:17:30.649 }, 00:17:30.649 { 00:17:30.649 "subsystem": "sock", 00:17:30.649 "config": [ 00:17:30.649 { 00:17:30.649 "method": "sock_set_default_impl", 00:17:30.649 "params": { 00:17:30.649 "impl_name": "posix" 00:17:30.649 } 00:17:30.649 }, 00:17:30.649 { 00:17:30.649 "method": "sock_impl_set_options", 00:17:30.649 "params": { 00:17:30.649 "enable_ktls": false, 00:17:30.649 "enable_placement_id": 0, 00:17:30.649 "enable_quickack": false, 00:17:30.649 "enable_recv_pipe": true, 00:17:30.649 "enable_zerocopy_send_client": false, 00:17:30.649 "enable_zerocopy_send_server": true, 00:17:30.649 "impl_name": "ssl", 00:17:30.649 "recv_buf_size": 4096, 00:17:30.649 "send_buf_size": 4096, 00:17:30.649 "tls_version": 0, 00:17:30.649 "zerocopy_threshold": 0 00:17:30.649 } 00:17:30.649 }, 00:17:30.649 { 00:17:30.649 "method": "sock_impl_set_options", 00:17:30.649 "params": { 00:17:30.649 "enable_ktls": false, 00:17:30.649 "enable_placement_id": 0, 00:17:30.649 "enable_quickack": false, 00:17:30.649 "enable_recv_pipe": true, 00:17:30.649 "enable_zerocopy_send_client": false, 00:17:30.649 "enable_zerocopy_send_server": true, 00:17:30.649 "impl_name": "posix", 00:17:30.649 "recv_buf_size": 2097152, 00:17:30.649 "send_buf_size": 2097152, 00:17:30.649 "tls_version": 0, 00:17:30.650 "zerocopy_threshold": 0 00:17:30.650 } 00:17:30.650 } 00:17:30.650 ] 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "subsystem": "vmd", 00:17:30.650 "config": [] 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "subsystem": "accel", 00:17:30.650 "config": [ 00:17:30.650 { 00:17:30.650 "method": "accel_set_options", 00:17:30.650 "params": { 00:17:30.650 "buf_count": 2048, 00:17:30.650 "large_cache_size": 16, 00:17:30.650 "sequence_count": 2048, 00:17:30.650 "small_cache_size": 128, 00:17:30.650 "task_count": 2048 00:17:30.650 } 00:17:30.650 } 00:17:30.650 ] 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "subsystem": "bdev", 00:17:30.650 "config": [ 00:17:30.650 { 00:17:30.650 "method": "bdev_set_options", 00:17:30.650 "params": { 00:17:30.650 "bdev_auto_examine": true, 00:17:30.650 "bdev_io_cache_size": 256, 00:17:30.650 "bdev_io_pool_size": 65535, 00:17:30.650 "iobuf_large_cache_size": 16, 00:17:30.650 "iobuf_small_cache_size": 128 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "bdev_raid_set_options", 00:17:30.650 "params": { 00:17:30.650 "process_max_bandwidth_mb_sec": 0, 00:17:30.650 "process_window_size_kb": 1024 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "bdev_iscsi_set_options", 00:17:30.650 "params": { 00:17:30.650 "timeout_sec": 30 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "bdev_nvme_set_options", 00:17:30.650 "params": { 00:17:30.650 "action_on_timeout": "none", 00:17:30.650 "allow_accel_sequence": false, 00:17:30.650 "arbitration_burst": 0, 00:17:30.650 "bdev_retry_count": 3, 00:17:30.650 "ctrlr_loss_timeout_sec": 0, 00:17:30.650 "delay_cmd_submit": true, 00:17:30.650 "dhchap_dhgroups": [ 00:17:30.650 "null", 00:17:30.650 "ffdhe2048", 00:17:30.650 "ffdhe3072", 00:17:30.650 "ffdhe4096", 00:17:30.650 "ffdhe6144", 00:17:30.650 "ffdhe8192" 00:17:30.650 ], 00:17:30.650 "dhchap_digests": [ 00:17:30.650 "sha256", 00:17:30.650 "sha384", 00:17:30.650 "sha512" 00:17:30.650 ], 00:17:30.650 "disable_auto_failback": false, 00:17:30.650 "fast_io_fail_timeout_sec": 0, 00:17:30.650 "generate_uuids": false, 00:17:30.650 "high_priority_weight": 0, 00:17:30.650 "io_path_stat": false, 00:17:30.650 
"io_queue_requests": 0, 00:17:30.650 "keep_alive_timeout_ms": 10000, 00:17:30.650 "low_priority_weight": 0, 00:17:30.650 "medium_priority_weight": 0, 00:17:30.650 "nvme_adminq_poll_period_us": 10000, 00:17:30.650 "nvme_error_stat": false, 00:17:30.650 "nvme_ioq_poll_period_us": 0, 00:17:30.650 "rdma_cm_event_timeout_ms": 0, 00:17:30.650 "rdma_max_cq_size": 0, 00:17:30.650 "rdma_srq_size": 0, 00:17:30.650 "reconnect_delay_sec": 0, 00:17:30.650 "timeout_admin_us": 0, 00:17:30.650 "timeout_us": 0, 00:17:30.650 "transport_ack_timeout": 0, 00:17:30.650 "transport_retry_count": 4, 00:17:30.650 "transport_tos": 0 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "bdev_nvme_set_hotplug", 00:17:30.650 "params": { 00:17:30.650 "enable": false, 00:17:30.650 "period_us": 100000 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "bdev_malloc_create", 00:17:30.650 "params": { 00:17:30.650 "block_size": 4096, 00:17:30.650 "dif_is_head_of_md": false, 00:17:30.650 "dif_pi_format": 0, 00:17:30.650 "dif_type": 0, 00:17:30.650 "md_size": 0, 00:17:30.650 "name": "malloc0", 00:17:30.650 "num_blocks": 8192, 00:17:30.650 "optimal_io_boundary": 0, 00:17:30.650 "physical_block_size": 4096, 00:17:30.650 "uuid": "420350d8-c276-4fc4-b519-e9208ef00cc4" 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "bdev_wait_for_examine" 00:17:30.650 } 00:17:30.650 ] 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "subsystem": "nbd", 00:17:30.650 "config": [] 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "subsystem": "scheduler", 00:17:30.650 "config": [ 00:17:30.650 { 00:17:30.650 "method": "framework_set_scheduler", 00:17:30.650 "params": { 00:17:30.650 "name": "static" 00:17:30.650 } 00:17:30.650 } 00:17:30.650 ] 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "subsystem": "nvmf", 00:17:30.650 "config": [ 00:17:30.650 { 00:17:30.650 "method": "nvmf_set_config", 00:17:30.650 "params": { 00:17:30.650 "admin_cmd_passthru": { 00:17:30.650 "identify_ctrlr": false 00:17:30.650 }, 00:17:30.650 "dhchap_dhgroups": [ 00:17:30.650 "null", 00:17:30.650 "ffdhe2048", 00:17:30.650 "ffdhe3072", 00:17:30.650 "ffdhe4096", 00:17:30.650 "ffdhe6144", 00:17:30.650 "ffdhe8192" 00:17:30.650 ], 00:17:30.650 "dhchap_digests": [ 00:17:30.650 "sha256", 00:17:30.650 "sha384", 00:17:30.650 "sha512" 00:17:30.650 ], 00:17:30.650 "discovery_filter": "match_any" 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "nvmf_set_max_subsystems", 00:17:30.650 "params": { 00:17:30.650 "max_subsystems": 1024 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "nvmf_set_crdt", 00:17:30.650 "params": { 00:17:30.650 "crdt1": 0, 00:17:30.650 "crdt2": 0, 00:17:30.650 "crdt3": 0 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "nvmf_create_transport", 00:17:30.650 "params": { 00:17:30.650 "abort_timeout_sec": 1, 00:17:30.650 "ack_timeout": 0, 00:17:30.650 "buf_cache_size": 4294967295, 00:17:30.650 "c2h_success": false, 00:17:30.650 "data_wr_pool_size": 0, 00:17:30.650 "dif_insert_or_strip": false, 00:17:30.650 "in_capsule_data_size": 4096, 00:17:30.650 "io_unit_size": 131072, 00:17:30.650 "max_aq_depth": 128, 00:17:30.650 "max_io_qpairs_per_ctrlr": 127, 00:17:30.650 "max_io_size": 131072, 00:17:30.650 "max_queue_depth": 128, 00:17:30.650 "num_shared_buffers": 511, 00:17:30.650 "sock_priority": 0, 00:17:30.650 "trtype": "TCP", 00:17:30.650 "zcopy": false 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "nvmf_create_subsystem", 00:17:30.650 "params": { 
00:17:30.650 "allow_any_host": false, 00:17:30.650 "ana_reporting": false, 00:17:30.650 "max_cntlid": 65519, 00:17:30.650 "max_namespaces": 10, 00:17:30.650 "min_cntlid": 1, 00:17:30.650 "model_number": "SPDK bdev Controller", 00:17:30.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.650 "serial_number": "SPDK00000000000001" 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "nvmf_subsystem_add_host", 00:17:30.650 "params": { 00:17:30.650 "host": "nqn.2016-06.io.spdk:host1", 00:17:30.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.650 "psk": "key0" 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "nvmf_subsystem_add_ns", 00:17:30.650 "params": { 00:17:30.650 "namespace": { 00:17:30.650 "bdev_name": "malloc0", 00:17:30.650 "nguid": "420350D8C2764FC4B519E9208EF00CC4", 00:17:30.650 "no_auto_visible": false, 00:17:30.650 "nsid": 1, 00:17:30.650 "uuid": "420350d8-c276-4fc4-b519-e9208ef00cc4" 00:17:30.650 }, 00:17:30.650 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:30.650 } 00:17:30.650 }, 00:17:30.650 { 00:17:30.650 "method": "nvmf_subsystem_add_listener", 00:17:30.650 "params": { 00:17:30.650 "listen_address": { 00:17:30.650 "adrfam": "IPv4", 00:17:30.650 "traddr": "10.0.0.3", 00:17:30.650 "trsvcid": "4420", 00:17:30.650 "trtype": "TCP" 00:17:30.650 }, 00:17:30.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.651 "secure_channel": true 00:17:30.651 } 00:17:30.651 } 00:17:30.651 ] 00:17:30.651 } 00:17:30.651 ] 00:17:30.651 }' 00:17:30.651 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:31.216 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:31.216 "subsystems": [ 00:17:31.216 { 00:17:31.216 "subsystem": "keyring", 00:17:31.216 "config": [ 00:17:31.216 { 00:17:31.216 "method": "keyring_file_add_key", 00:17:31.216 "params": { 00:17:31.216 "name": "key0", 00:17:31.216 "path": "/tmp/tmp.z6DHlBUpjk" 00:17:31.216 } 00:17:31.216 } 00:17:31.216 ] 00:17:31.216 }, 00:17:31.216 { 00:17:31.216 "subsystem": "iobuf", 00:17:31.216 "config": [ 00:17:31.216 { 00:17:31.216 "method": "iobuf_set_options", 00:17:31.216 "params": { 00:17:31.216 "large_bufsize": 135168, 00:17:31.216 "large_pool_count": 1024, 00:17:31.216 "small_bufsize": 8192, 00:17:31.216 "small_pool_count": 8192 00:17:31.216 } 00:17:31.216 } 00:17:31.216 ] 00:17:31.216 }, 00:17:31.216 { 00:17:31.216 "subsystem": "sock", 00:17:31.216 "config": [ 00:17:31.216 { 00:17:31.216 "method": "sock_set_default_impl", 00:17:31.216 "params": { 00:17:31.216 "impl_name": "posix" 00:17:31.216 } 00:17:31.216 }, 00:17:31.216 { 00:17:31.216 "method": "sock_impl_set_options", 00:17:31.216 "params": { 00:17:31.216 "enable_ktls": false, 00:17:31.216 "enable_placement_id": 0, 00:17:31.216 "enable_quickack": false, 00:17:31.216 "enable_recv_pipe": true, 00:17:31.216 "enable_zerocopy_send_client": false, 00:17:31.217 "enable_zerocopy_send_server": true, 00:17:31.217 "impl_name": "ssl", 00:17:31.217 "recv_buf_size": 4096, 00:17:31.217 "send_buf_size": 4096, 00:17:31.217 "tls_version": 0, 00:17:31.217 "zerocopy_threshold": 0 00:17:31.217 } 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "method": "sock_impl_set_options", 00:17:31.217 "params": { 00:17:31.217 "enable_ktls": false, 00:17:31.217 "enable_placement_id": 0, 00:17:31.217 "enable_quickack": false, 00:17:31.217 "enable_recv_pipe": true, 00:17:31.217 "enable_zerocopy_send_client": false, 00:17:31.217 "enable_zerocopy_send_server": 
true, 00:17:31.217 "impl_name": "posix", 00:17:31.217 "recv_buf_size": 2097152, 00:17:31.217 "send_buf_size": 2097152, 00:17:31.217 "tls_version": 0, 00:17:31.217 "zerocopy_threshold": 0 00:17:31.217 } 00:17:31.217 } 00:17:31.217 ] 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "subsystem": "vmd", 00:17:31.217 "config": [] 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "subsystem": "accel", 00:17:31.217 "config": [ 00:17:31.217 { 00:17:31.217 "method": "accel_set_options", 00:17:31.217 "params": { 00:17:31.217 "buf_count": 2048, 00:17:31.217 "large_cache_size": 16, 00:17:31.217 "sequence_count": 2048, 00:17:31.217 "small_cache_size": 128, 00:17:31.217 "task_count": 2048 00:17:31.217 } 00:17:31.217 } 00:17:31.217 ] 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "subsystem": "bdev", 00:17:31.217 "config": [ 00:17:31.217 { 00:17:31.217 "method": "bdev_set_options", 00:17:31.217 "params": { 00:17:31.217 "bdev_auto_examine": true, 00:17:31.217 "bdev_io_cache_size": 256, 00:17:31.217 "bdev_io_pool_size": 65535, 00:17:31.217 "iobuf_large_cache_size": 16, 00:17:31.217 "iobuf_small_cache_size": 128 00:17:31.217 } 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "method": "bdev_raid_set_options", 00:17:31.217 "params": { 00:17:31.217 "process_max_bandwidth_mb_sec": 0, 00:17:31.217 "process_window_size_kb": 1024 00:17:31.217 } 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "method": "bdev_iscsi_set_options", 00:17:31.217 "params": { 00:17:31.217 "timeout_sec": 30 00:17:31.217 } 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "method": "bdev_nvme_set_options", 00:17:31.217 "params": { 00:17:31.217 "action_on_timeout": "none", 00:17:31.217 "allow_accel_sequence": false, 00:17:31.217 "arbitration_burst": 0, 00:17:31.217 "bdev_retry_count": 3, 00:17:31.217 "ctrlr_loss_timeout_sec": 0, 00:17:31.217 "delay_cmd_submit": true, 00:17:31.217 "dhchap_dhgroups": [ 00:17:31.217 "null", 00:17:31.217 "ffdhe2048", 00:17:31.217 "ffdhe3072", 00:17:31.217 "ffdhe4096", 00:17:31.217 "ffdhe6144", 00:17:31.217 "ffdhe8192" 00:17:31.217 ], 00:17:31.217 "dhchap_digests": [ 00:17:31.217 "sha256", 00:17:31.217 "sha384", 00:17:31.217 "sha512" 00:17:31.217 ], 00:17:31.217 "disable_auto_failback": false, 00:17:31.217 "fast_io_fail_timeout_sec": 0, 00:17:31.217 "generate_uuids": false, 00:17:31.217 "high_priority_weight": 0, 00:17:31.217 "io_path_stat": false, 00:17:31.217 "io_queue_requests": 512, 00:17:31.217 "keep_alive_timeout_ms": 10000, 00:17:31.217 "low_priority_weight": 0, 00:17:31.217 "medium_priority_weight": 0, 00:17:31.217 "nvme_adminq_poll_period_us": 10000, 00:17:31.217 "nvme_error_stat": false, 00:17:31.217 "nvme_ioq_poll_period_us": 0, 00:17:31.217 "rdma_cm_event_timeout_ms": 0, 00:17:31.217 "rdma_max_cq_size": 0, 00:17:31.217 "rdma_srq_size": 0, 00:17:31.217 "reconnect_delay_sec": 0, 00:17:31.217 "timeout_admin_us": 0, 00:17:31.217 "timeout_us": 0, 00:17:31.217 "transport_ack_timeout": 0, 00:17:31.217 "transport_retry_count": 4, 00:17:31.217 "transport_tos": 0 00:17:31.217 } 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "method": "bdev_nvme_attach_controller", 00:17:31.217 "params": { 00:17:31.217 "adrfam": "IPv4", 00:17:31.217 "ctrlr_loss_timeout_sec": 0, 00:17:31.217 "ddgst": false, 00:17:31.217 "fast_io_fail_timeout_sec": 0, 00:17:31.217 "hdgst": false, 00:17:31.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.217 "multipath": "multipath", 00:17:31.217 "name": "TLSTEST", 00:17:31.217 "prchk_guard": false, 00:17:31.217 "prchk_reftag": false, 00:17:31.217 "psk": "key0", 00:17:31.217 "reconnect_delay_sec": 0, 00:17:31.217 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:17:31.217 "traddr": "10.0.0.3", 00:17:31.217 "trsvcid": "4420", 00:17:31.217 "trtype": "TCP" 00:17:31.217 } 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "method": "bdev_nvme_set_hotplug", 00:17:31.217 "params": { 00:17:31.217 "enable": false, 00:17:31.217 "period_us": 100000 00:17:31.217 } 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "method": "bdev_wait_for_examine" 00:17:31.217 } 00:17:31.217 ] 00:17:31.217 }, 00:17:31.217 { 00:17:31.217 "subsystem": "nbd", 00:17:31.217 "config": [] 00:17:31.217 } 00:17:31.217 ] 00:17:31.217 }' 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84015 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84015 ']' 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84015 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84015 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:31.217 killing process with pid 84015 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84015' 00:17:31.217 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.217 00:17:31.217 Latency(us) 00:17:31.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.217 =================================================================================================================== 00:17:31.217 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84015 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84015 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83913 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83913 ']' 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83913 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83913 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:31.217 killing process with pid 83913 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83913' 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83913 00:17:31.217 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@974 -- # wait 83913 00:17:31.476 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:31.476 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:31.476 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:31.476 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.476 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:31.476 "subsystems": [ 00:17:31.476 { 00:17:31.476 "subsystem": "keyring", 00:17:31.476 "config": [ 00:17:31.476 { 00:17:31.476 "method": "keyring_file_add_key", 00:17:31.476 "params": { 00:17:31.476 "name": "key0", 00:17:31.476 "path": "/tmp/tmp.z6DHlBUpjk" 00:17:31.476 } 00:17:31.476 } 00:17:31.476 ] 00:17:31.476 }, 00:17:31.476 { 00:17:31.476 "subsystem": "iobuf", 00:17:31.476 "config": [ 00:17:31.476 { 00:17:31.476 "method": "iobuf_set_options", 00:17:31.476 "params": { 00:17:31.476 "large_bufsize": 135168, 00:17:31.476 "large_pool_count": 1024, 00:17:31.476 "small_bufsize": 8192, 00:17:31.476 "small_pool_count": 8192 00:17:31.476 } 00:17:31.476 } 00:17:31.476 ] 00:17:31.476 }, 00:17:31.476 { 00:17:31.476 "subsystem": "sock", 00:17:31.476 "config": [ 00:17:31.476 { 00:17:31.476 "method": "sock_set_default_impl", 00:17:31.476 "params": { 00:17:31.476 "impl_name": "posix" 00:17:31.476 } 00:17:31.476 }, 00:17:31.476 { 00:17:31.476 "method": "sock_impl_set_options", 00:17:31.476 "params": { 00:17:31.476 "enable_ktls": false, 00:17:31.476 "enable_placement_id": 0, 00:17:31.476 "enable_quickack": false, 00:17:31.476 "enable_recv_pipe": true, 00:17:31.476 "enable_zerocopy_send_client": false, 00:17:31.476 "enable_zerocopy_send_server": true, 00:17:31.476 "impl_name": "ssl", 00:17:31.476 "recv_buf_size": 4096, 00:17:31.476 "send_buf_size": 4096, 00:17:31.476 "tls_version": 0, 00:17:31.476 "zerocopy_threshold": 0 00:17:31.476 } 00:17:31.476 }, 00:17:31.476 { 00:17:31.476 "method": "sock_impl_set_options", 00:17:31.476 "params": { 00:17:31.476 "enable_ktls": false, 00:17:31.476 "enable_placement_id": 0, 00:17:31.476 "enable_quickack": false, 00:17:31.476 "enable_recv_pipe": true, 00:17:31.476 "enable_zerocopy_send_client": false, 00:17:31.476 "enable_zerocopy_send_server": true, 00:17:31.476 "impl_name": "posix", 00:17:31.476 "recv_buf_size": 2097152, 00:17:31.476 "send_buf_size": 2097152, 00:17:31.476 "tls_version": 0, 00:17:31.476 "zerocopy_threshold": 0 00:17:31.476 } 00:17:31.476 } 00:17:31.476 ] 00:17:31.476 }, 00:17:31.476 { 00:17:31.476 "subsystem": "vmd", 00:17:31.476 "config": [] 00:17:31.476 }, 00:17:31.476 { 00:17:31.476 "subsystem": "accel", 00:17:31.476 "config": [ 00:17:31.476 { 00:17:31.476 "method": "accel_set_options", 00:17:31.476 "params": { 00:17:31.476 "buf_count": 2048, 00:17:31.476 "large_cache_size": 16, 00:17:31.476 "sequence_count": 2048, 00:17:31.476 "small_cache_size": 128, 00:17:31.476 "task_count": 2048 00:17:31.476 } 00:17:31.476 } 00:17:31.476 ] 00:17:31.476 }, 00:17:31.476 { 00:17:31.476 "subsystem": "bdev", 00:17:31.476 "config": [ 00:17:31.476 { 00:17:31.476 "method": "bdev_set_options", 00:17:31.476 "params": { 00:17:31.476 "bdev_auto_examine": true, 00:17:31.476 "bdev_io_cache_size": 256, 00:17:31.476 "bdev_io_pool_size": 65535, 00:17:31.476 "iobuf_large_cache_size": 16, 00:17:31.476 "iobuf_small_cache_size": 128 00:17:31.476 } 00:17:31.476 }, 00:17:31.476 { 00:17:31.476 "method": 
"bdev_raid_set_options", 00:17:31.476 "params": { 00:17:31.476 "process_max_bandwidth_mb_sec": 0, 00:17:31.476 "process_window_size_kb": 1024 00:17:31.476 } 00:17:31.476 }, 00:17:31.476 { 00:17:31.476 "method": "bdev_iscsi_set_options", 00:17:31.476 "params": { 00:17:31.476 "timeout_sec": 30 00:17:31.476 } 00:17:31.476 }, 00:17:31.476 { 00:17:31.476 "method": "bdev_nvme_set_options", 00:17:31.476 "params": { 00:17:31.476 "action_on_timeout": "none", 00:17:31.476 "allow_accel_sequence": false, 00:17:31.476 "arbitration_burst": 0, 00:17:31.476 "bdev_retry_count": 3, 00:17:31.476 "ctrlr_loss_timeout_sec": 0, 00:17:31.476 "delay_cmd_submit": true, 00:17:31.476 "dhchap_dhgroups": [ 00:17:31.476 "null", 00:17:31.476 "ffdhe2048", 00:17:31.476 "ffdhe3072", 00:17:31.476 "ffdhe4096", 00:17:31.476 "ffdhe6144", 00:17:31.476 "ffdhe8192" 00:17:31.476 ], 00:17:31.476 "dhchap_digests": [ 00:17:31.476 "sha256", 00:17:31.476 "sha384", 00:17:31.476 "sha512" 00:17:31.476 ], 00:17:31.476 "disable_auto_failback": false, 00:17:31.476 "fast_io_fail_timeout_sec": 0, 00:17:31.476 "generate_uuids": false, 00:17:31.476 "high_priority_weight": 0, 00:17:31.476 "io_path_stat": false, 00:17:31.476 "io_queue_requests": 0, 00:17:31.476 "keep_alive_timeout_ms": 10000, 00:17:31.476 "low_priority_weight": 0, 00:17:31.476 "medium_priority_weight": 0, 00:17:31.476 "nvme_adminq_poll_period_us": 10000, 00:17:31.476 "nvme_error_stat": false, 00:17:31.476 "nvme_ioq_poll_period_us": 0, 00:17:31.476 "rdma_cm_event_timeout_ms": 0, 00:17:31.476 "rdma_max_cq_size": 0, 00:17:31.476 "rdma_srq_size": 0, 00:17:31.476 "reconnect_delay_sec": 0, 00:17:31.476 "timeout_admin_us": 0, 00:17:31.476 "timeout_us": 0, 00:17:31.476 "transport_ack_timeout": 0, 00:17:31.476 "transport_retry_count": 4, 00:17:31.476 "transport_tos": 0 00:17:31.476 } 00:17:31.476 }, 00:17:31.476 { 00:17:31.477 "method": "bdev_nvme_set_hotplug", 00:17:31.477 "params": { 00:17:31.477 "enable": false, 00:17:31.477 "period_us": 100000 00:17:31.477 } 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "method": "bdev_malloc_create", 00:17:31.477 "params": { 00:17:31.477 "block_size": 4096, 00:17:31.477 "dif_is_head_of_md": false, 00:17:31.477 "dif_pi_format": 0, 00:17:31.477 "dif_type": 0, 00:17:31.477 "md_size": 0, 00:17:31.477 "name": "malloc0", 00:17:31.477 "num_blocks": 8192, 00:17:31.477 "optimal_io_boundary": 0, 00:17:31.477 "physical_block_size": 4096, 00:17:31.477 "uuid": "420350d8-c276-4fc4-b519-e9208ef00cc4" 00:17:31.477 } 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "method": "bdev_wait_for_examine" 00:17:31.477 } 00:17:31.477 ] 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "subsystem": "nbd", 00:17:31.477 "config": [] 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "subsystem": "scheduler", 00:17:31.477 "config": [ 00:17:31.477 { 00:17:31.477 "method": "framework_set_scheduler", 00:17:31.477 "params": { 00:17:31.477 "name": "static" 00:17:31.477 } 00:17:31.477 } 00:17:31.477 ] 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "subsystem": "nvmf", 00:17:31.477 "config": [ 00:17:31.477 { 00:17:31.477 "method": "nvmf_set_config", 00:17:31.477 "params": { 00:17:31.477 "admin_cmd_passthru": { 00:17:31.477 "identify_ctrlr": false 00:17:31.477 }, 00:17:31.477 "dhchap_dhgroups": [ 00:17:31.477 "null", 00:17:31.477 "ffdhe2048", 00:17:31.477 "ffdhe3072", 00:17:31.477 "ffdhe4096", 00:17:31.477 "ffdhe6144", 00:17:31.477 "ffdhe8192" 00:17:31.477 ], 00:17:31.477 "dhchap_digests": [ 00:17:31.477 "sha256", 00:17:31.477 "sha384", 00:17:31.477 "sha512" 00:17:31.477 ], 00:17:31.477 "discovery_filter": 
"match_any" 00:17:31.477 } 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "method": "nvmf_set_max_subsystems", 00:17:31.477 "params": { 00:17:31.477 "max_subsystems": 1024 00:17:31.477 } 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "method": "nvmf_set_crdt", 00:17:31.477 "params": { 00:17:31.477 "crdt1": 0, 00:17:31.477 "crdt2": 0, 00:17:31.477 "crdt3": 0 00:17:31.477 } 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "method": "nvmf_create_transport", 00:17:31.477 "params": { 00:17:31.477 "abort_timeout_sec": 1, 00:17:31.477 "ack_timeout": 0, 00:17:31.477 "buf_cache_size": 4294967295, 00:17:31.477 "c2h_success": false, 00:17:31.477 "data_wr_pool_size": 0, 00:17:31.477 "dif_insert_or_strip": false, 00:17:31.477 "in_capsule_data_size": 4096, 00:17:31.477 "io_unit_size": 131072, 00:17:31.477 "max_aq_depth": 128, 00:17:31.477 "max_io_qpairs_per_ctrlr": 127, 00:17:31.477 "max_io_size": 131072, 00:17:31.477 "max_queue_depth": 128, 00:17:31.477 "num_shared_buffers": 511, 00:17:31.477 "sock_priority": 0, 00:17:31.477 "trtype": "TCP", 00:17:31.477 "zcopy": false 00:17:31.477 } 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "method": "nvmf_create_subsystem", 00:17:31.477 "params": { 00:17:31.477 "allow_any_host": false, 00:17:31.477 "ana_reporting": false, 00:17:31.477 "max_cntlid": 65519, 00:17:31.477 "max_namespaces": 10, 00:17:31.477 "min_cntlid": 1, 00:17:31.477 "model_number": "SPDK bdev Controller", 00:17:31.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.477 "serial_number": "SPDK00000000000001" 00:17:31.477 } 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "method": "nvmf_subsystem_add_host", 00:17:31.477 "params": { 00:17:31.477 "host": "nqn.2016-06.io.spdk:host1", 00:17:31.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.477 "psk": "key0" 00:17:31.477 } 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "method": "nvmf_subsystem_add_ns", 00:17:31.477 "params": { 00:17:31.477 "namespace": { 00:17:31.477 "bdev_name": "malloc0", 00:17:31.477 "nguid": "420350D8C2764FC4B519E9208EF00CC4", 00:17:31.477 "no_auto_visible": false, 00:17:31.477 "nsid": 1, 00:17:31.477 "uuid": "420350d8-c276-4fc4-b519-e9208ef00cc4" 00:17:31.477 }, 00:17:31.477 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:31.477 } 00:17:31.477 }, 00:17:31.477 { 00:17:31.477 "method": "nvmf_subsystem_add_listener", 00:17:31.477 "params": { 00:17:31.477 "listen_address": { 00:17:31.477 "adrfam": "IPv4", 00:17:31.477 "traddr": "10.0.0.3", 00:17:31.477 "trsvcid": "4420", 00:17:31.477 "trtype": "TCP" 00:17:31.477 }, 00:17:31.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.477 "secure_channel": true 00:17:31.477 } 00:17:31.477 } 00:17:31.477 ] 00:17:31.477 } 00:17:31.477 ] 00:17:31.477 }' 00:17:31.477 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84087 00:17:31.477 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:31.477 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84087 00:17:31.477 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84087 ']' 00:17:31.477 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.477 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:31.477 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.477 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.477 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.477 [2024-10-01 15:30:30.561003] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:17:31.477 [2024-10-01 15:30:30.561094] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.734 [2024-10-01 15:30:30.697929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.734 [2024-10-01 15:30:30.784077] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.734 [2024-10-01 15:30:30.784147] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.734 [2024-10-01 15:30:30.784167] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.734 [2024-10-01 15:30:30.784183] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.734 [2024-10-01 15:30:30.784195] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.734 [2024-10-01 15:30:30.784315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.992 [2024-10-01 15:30:30.976813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.992 [2024-10-01 15:30:31.015259] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:31.992 [2024-10-01 15:30:31.015540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84132 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84132 /var/tmp/bdevperf.sock 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84132 ']' 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.557 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:32.557 "subsystems": [ 00:17:32.557 { 00:17:32.557 "subsystem": "keyring", 00:17:32.557 "config": [ 00:17:32.557 { 00:17:32.557 "method": "keyring_file_add_key", 00:17:32.557 "params": { 00:17:32.557 "name": "key0", 00:17:32.558 "path": "/tmp/tmp.z6DHlBUpjk" 00:17:32.558 } 00:17:32.558 } 00:17:32.558 ] 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "subsystem": "iobuf", 00:17:32.558 "config": [ 00:17:32.558 { 00:17:32.558 "method": "iobuf_set_options", 00:17:32.558 "params": { 00:17:32.558 "large_bufsize": 135168, 00:17:32.558 "large_pool_count": 1024, 00:17:32.558 "small_bufsize": 8192, 00:17:32.558 "small_pool_count": 8192 00:17:32.558 } 00:17:32.558 } 00:17:32.558 ] 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "subsystem": "sock", 00:17:32.558 "config": [ 00:17:32.558 { 00:17:32.558 "method": "sock_set_default_impl", 00:17:32.558 "params": { 00:17:32.558 "impl_name": "posix" 00:17:32.558 } 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "method": "sock_impl_set_options", 00:17:32.558 "params": { 00:17:32.558 "enable_ktls": false, 00:17:32.558 "enable_placement_id": 0, 00:17:32.558 "enable_quickack": false, 00:17:32.558 "enable_recv_pipe": true, 00:17:32.558 "enable_zerocopy_send_client": false, 00:17:32.558 "enable_zerocopy_send_server": true, 00:17:32.558 "impl_name": "ssl", 00:17:32.558 "recv_buf_size": 4096, 00:17:32.558 "send_buf_size": 4096, 00:17:32.558 "tls_version": 0, 00:17:32.558 "zerocopy_threshold": 0 00:17:32.558 } 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "method": "sock_impl_set_options", 00:17:32.558 "params": { 00:17:32.558 "enable_ktls": false, 00:17:32.558 "enable_placement_id": 0, 00:17:32.558 "enable_quickack": false, 00:17:32.558 "enable_recv_pipe": true, 00:17:32.558 "enable_zerocopy_send_client": false, 00:17:32.558 "enable_zerocopy_send_server": true, 00:17:32.558 "impl_name": "posix", 00:17:32.558 "recv_buf_size": 2097152, 00:17:32.558 "send_buf_size": 2097152, 00:17:32.558 "tls_version": 0, 00:17:32.558 "zerocopy_threshold": 0 00:17:32.558 } 00:17:32.558 } 00:17:32.558 ] 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "subsystem": "vmd", 00:17:32.558 "config": [] 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "subsystem": "accel", 00:17:32.558 "config": [ 00:17:32.558 { 00:17:32.558 "method": "accel_set_options", 00:17:32.558 "params": { 00:17:32.558 "buf_count": 2048, 
00:17:32.558 "large_cache_size": 16, 00:17:32.558 "sequence_count": 2048, 00:17:32.558 "small_cache_size": 128, 00:17:32.558 "task_count": 2048 00:17:32.558 } 00:17:32.558 } 00:17:32.558 ] 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "subsystem": "bdev", 00:17:32.558 "config": [ 00:17:32.558 { 00:17:32.558 "method": "bdev_set_options", 00:17:32.558 "params": { 00:17:32.558 "bdev_auto_examine": true, 00:17:32.558 "bdev_io_cache_size": 256, 00:17:32.558 "bdev_io_pool_size": 65535, 00:17:32.558 "iobuf_large_cache_size": 16, 00:17:32.558 "iobuf_small_cache_size": 128 00:17:32.558 } 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "method": "bdev_raid_set_options", 00:17:32.558 "params": { 00:17:32.558 "process_max_bandwidth_mb_sec": 0, 00:17:32.558 "process_window_size_kb": 1024 00:17:32.558 } 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "method": "bdev_iscsi_set_options", 00:17:32.558 "params": { 00:17:32.558 "timeout_sec": 30 00:17:32.558 } 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "method": "bdev_nvme_set_options", 00:17:32.558 "params": { 00:17:32.558 "action_on_timeout": "none", 00:17:32.558 "allow_accel_sequence": false, 00:17:32.558 "arbitration_burst": 0, 00:17:32.558 "bdev_retry_count": 3, 00:17:32.558 "ctrlr_loss_timeout_sec": 0, 00:17:32.558 "delay_cmd_submit": true, 00:17:32.558 "dhchap_dhgroups": [ 00:17:32.558 "null", 00:17:32.558 "ffdhe2048", 00:17:32.558 "ffdhe3072", 00:17:32.558 "ffdhe4096", 00:17:32.558 "ffdhe6144", 00:17:32.558 "ffdhe8192" 00:17:32.558 ], 00:17:32.558 "dhchap_digests": [ 00:17:32.558 "sha256", 00:17:32.558 "sha384", 00:17:32.558 "sha512" 00:17:32.558 ], 00:17:32.558 "disable_auto_failback": false, 00:17:32.558 "fast_io_fail_timeout_sec": 0, 00:17:32.558 "generate_uuids": false, 00:17:32.558 "high_priority_weight": 0, 00:17:32.558 "io_path_stat": false, 00:17:32.558 "io_queue_requests": 512, 00:17:32.558 "keep_alive_timeout_ms": 10000, 00:17:32.558 "low_priority_weight": 0, 00:17:32.558 "medium_priority_weight": 0, 00:17:32.558 "nvme_adminq_poll_period_us": 10000, 00:17:32.558 "nvme_error_stat": false, 00:17:32.558 "nvme_ioq_poll_period_us": 0, 00:17:32.558 "rdma_cm_event_timeout_ms": 0, 00:17:32.558 "rdma_max_cq_size": 0, 00:17:32.558 "rdma_srq_size": 0, 00:17:32.558 "reconnect_delay_sec": 0, 00:17:32.558 "timeout_admin_us": 0, 00:17:32.558 "timeout_us": 0, 00:17:32.558 "transport_ack_timeout": 0, 00:17:32.558 "transport_retry_count": 4, 00:17:32.558 "transport_tos": 0 00:17:32.558 } 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "method": "bdev_nvme_attach_controller", 00:17:32.558 "params": { 00:17:32.558 "adrfam": "IPv4", 00:17:32.558 "ctrlr_loss_timeout_sec": 0, 00:17:32.558 "ddgst": false, 00:17:32.558 "fast_io_fail_timeout_sec": 0, 00:17:32.558 "hdgst": false, 00:17:32.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.558 "multipath": "multipath", 00:17:32.558 "name": "TLSTEST", 00:17:32.558 "prchk_guard": false, 00:17:32.558 "prchk_reftag": false, 00:17:32.558 "psk": "key0", 00:17:32.558 "reconnect_delay_sec": 0, 00:17:32.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.558 "traddr": "10.0.0.3", 00:17:32.558 "trsvcid": "4420", 00:17:32.558 "trtype": "TCP" 00:17:32.558 } 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "method": "bdev_nvme_set_hotplug", 00:17:32.558 "params": { 00:17:32.558 "enable": false, 00:17:32.558 "period_us": 100000 00:17:32.558 } 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "method": "bdev_wait_for_examine" 00:17:32.558 } 00:17:32.558 ] 00:17:32.558 }, 00:17:32.558 { 00:17:32.558 "subsystem": "nbd", 00:17:32.558 "config": [] 
00:17:32.558 } 00:17:32.558 ] 00:17:32.558 }' 00:17:32.558 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.558 [2024-10-01 15:30:31.718230] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:17:32.558 [2024-10-01 15:30:31.718325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84132 ] 00:17:32.816 [2024-10-01 15:30:31.858298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.816 [2024-10-01 15:30:31.928998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.073 [2024-10-01 15:30:32.070961] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.028 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.028 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:34.028 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:34.028 Running I/O for 10 seconds... 00:17:44.249 3754.00 IOPS, 14.66 MiB/s 3811.00 IOPS, 14.89 MiB/s 3838.33 IOPS, 14.99 MiB/s 3864.75 IOPS, 15.10 MiB/s 3890.00 IOPS, 15.20 MiB/s 3853.67 IOPS, 15.05 MiB/s 3804.57 IOPS, 14.86 MiB/s 3715.50 IOPS, 14.51 MiB/s 3678.89 IOPS, 14.37 MiB/s 3684.10 IOPS, 14.39 MiB/s 00:17:44.249 Latency(us) 00:17:44.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.249 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:44.249 Verification LBA range: start 0x0 length 0x2000 00:17:44.249 TLSTESTn1 : 10.02 3690.67 14.42 0.00 0.00 34619.05 5659.93 36461.85 00:17:44.249 =================================================================================================================== 00:17:44.249 Total : 3690.67 14.42 0.00 0.00 34619.05 5659.93 36461.85 00:17:44.249 { 00:17:44.249 "results": [ 00:17:44.249 { 00:17:44.249 "job": "TLSTESTn1", 00:17:44.249 "core_mask": "0x4", 00:17:44.249 "workload": "verify", 00:17:44.249 "status": "finished", 00:17:44.249 "verify_range": { 00:17:44.249 "start": 0, 00:17:44.249 "length": 8192 00:17:44.249 }, 00:17:44.249 "queue_depth": 128, 00:17:44.249 "io_size": 4096, 00:17:44.249 "runtime": 10.016339, 00:17:44.249 "iops": 3690.6698145899413, 00:17:44.249 "mibps": 14.416678963241958, 00:17:44.249 "io_failed": 0, 00:17:44.249 "io_timeout": 0, 00:17:44.249 "avg_latency_us": 34619.05109392406, 00:17:44.249 "min_latency_us": 5659.927272727273, 00:17:44.249 "max_latency_us": 36461.847272727275 00:17:44.249 } 00:17:44.249 ], 00:17:44.249 "core_count": 1 00:17:44.249 } 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84132 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84132 ']' 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84132 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:44.249 15:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84132 00:17:44.249 killing process with pid 84132 00:17:44.249 Received shutdown signal, test time was about 10.000000 seconds 00:17:44.249 00:17:44.249 Latency(us) 00:17:44.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.249 =================================================================================================================== 00:17:44.249 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84132' 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84132 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84132 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84087 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84087 ']' 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84087 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84087 00:17:44.249 killing process with pid 84087 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84087' 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84087 00:17:44.249 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84087 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
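[editor's note] The run above completes the first TLS data-path check: bdevperf (pid 84132) drives a 10-second verify workload over the PSK-secured connection at roughly 3690 IOPS, then both bdevperf and the previous nvmf target (pid 84087) are torn down. The killprocess traces interleaved above follow a fixed pattern; the sketch below is a simplified reconstruction of that pattern from the xtrace lines, not the exact helper in common/autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                      # still alive?
        if [ "$(uname)" = Linux ]; then
            # refuse to kill a sudo wrapper; reactor_* process names are fine
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap it; SPDK apps exit cleanly on SIGTERM
    }

The all-zero Latency table printed at shutdown is bdevperf's normal response to SIGTERM outside an active test window, not a failure.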
00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84285 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84285 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84285 ']' 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.507 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.507 [2024-10-01 15:30:43.621380] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:17:44.507 [2024-10-01 15:30:43.621536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.767 [2024-10-01 15:30:43.764946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.767 [2024-10-01 15:30:43.826396] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.767 [2024-10-01 15:30:43.826471] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.767 [2024-10-01 15:30:43.826483] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.767 [2024-10-01 15:30:43.826492] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.767 [2024-10-01 15:30:43.826499] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:44.767 [2024-10-01 15:30:43.826535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.726 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.726 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:45.726 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:45.726 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:45.726 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.726 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.726 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.z6DHlBUpjk 00:17:45.726 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z6DHlBUpjk 00:17:45.726 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:45.984 [2024-10-01 15:30:44.930297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.984 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:46.243 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:46.501 [2024-10-01 15:30:45.490448] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:46.501 [2024-10-01 15:30:45.490675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:46.501 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:46.759 malloc0 00:17:46.759 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:47.019 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk 00:17:47.277 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:47.844 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84395 00:17:47.844 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:47.844 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.844 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84395 /var/tmp/bdevperf.sock 00:17:47.844 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84395 ']' 00:17:47.844 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:47.844 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.844 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.844 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.844 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.844 [2024-10-01 15:30:46.766769] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:17:47.844 [2024-10-01 15:30:46.766872] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84395 ] 00:17:47.844 [2024-10-01 15:30:46.909245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.844 [2024-10-01 15:30:47.004030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.780 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.780 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:48.780 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk 00:17:49.038 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:49.297 [2024-10-01 15:30:48.370567] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.297 nvme0n1 00:17:49.297 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:49.555 Running I/O for 1 seconds... 
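[editor's note] Before this bdevperf instance (pid 84395) attaches, the target side is rebuilt by setup_nvmf_tgt: a TCP transport, a subsystem with a malloc namespace, a TLS listener (-k), and a PSK bound to the host NQN. Condensed from the RPC calls traced above (key path and NQNs are the ones used in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -k                   # -k enables the TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0

The initiator side then mirrors the same key file into bdevperf's own keyring and attaches with --psk key0, which is why the two rpc.py invocations traced above both carry -s /var/tmp/bdevperf.sock.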
00:17:50.488 3826.00 IOPS, 14.95 MiB/s 00:17:50.488 Latency(us) 00:17:50.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.488 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:50.488 Verification LBA range: start 0x0 length 0x2000 00:17:50.488 nvme0n1 : 1.02 3889.63 15.19 0.00 0.00 32663.19 6166.34 28001.75 00:17:50.488 =================================================================================================================== 00:17:50.488 Total : 3889.63 15.19 0.00 0.00 32663.19 6166.34 28001.75 00:17:50.488 { 00:17:50.488 "results": [ 00:17:50.488 { 00:17:50.488 "job": "nvme0n1", 00:17:50.488 "core_mask": "0x2", 00:17:50.488 "workload": "verify", 00:17:50.488 "status": "finished", 00:17:50.488 "verify_range": { 00:17:50.488 "start": 0, 00:17:50.488 "length": 8192 00:17:50.488 }, 00:17:50.488 "queue_depth": 128, 00:17:50.488 "io_size": 4096, 00:17:50.488 "runtime": 1.01655, 00:17:50.488 "iops": 3889.6266784713, 00:17:50.488 "mibps": 15.193854212778515, 00:17:50.488 "io_failed": 0, 00:17:50.488 "io_timeout": 0, 00:17:50.488 "avg_latency_us": 32663.188206189352, 00:17:50.488 "min_latency_us": 6166.341818181818, 00:17:50.488 "max_latency_us": 28001.745454545453 00:17:50.488 } 00:17:50.488 ], 00:17:50.488 "core_count": 1 00:17:50.488 } 00:17:50.488 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84395 00:17:50.488 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84395 ']' 00:17:50.488 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84395 00:17:50.488 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:50.488 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:50.488 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84395 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:50.748 killing process with pid 84395 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84395' 00:17:50.748 Received shutdown signal, test time was about 1.000000 seconds 00:17:50.748 00:17:50.748 Latency(us) 00:17:50.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.748 =================================================================================================================== 00:17:50.748 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84395 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84395 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84285 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84285 ']' 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84285 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84285 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:50.748 killing process with pid 84285 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84285' 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84285 00:17:50.748 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84285 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84470 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84470 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84470 ']' 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.008 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.008 [2024-10-01 15:30:50.116047] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:17:51.008 [2024-10-01 15:30:50.116156] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.268 [2024-10-01 15:30:50.253543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.268 [2024-10-01 15:30:50.312321] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.268 [2024-10-01 15:30:50.312387] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.268 [2024-10-01 15:30:50.312402] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.268 [2024-10-01 15:30:50.312411] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
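[editor's note] nvmfappstart launches nvmf_tgt (here pid 84470) inside the nvmf_tgt_ns_spdk network namespace and then blocks in waitforlisten until the RPC server answers on /var/tmp/spdk.sock. A simplified sketch of that polling loop, assuming the helper probes the socket with the rpc_get_methods RPC (the exact retry cadence and error handling live in common/autotest_common.sh):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1     # target died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0                                # RPC server is listening
            fi
            sleep 0.5
        done
        return 1
    }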
00:17:51.268 [2024-10-01 15:30:50.312418] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.268 [2024-10-01 15:30:50.312458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.203 [2024-10-01 15:30:51.181935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.203 malloc0 00:17:52.203 [2024-10-01 15:30:51.209042] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:52.203 [2024-10-01 15:30:51.209272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84520 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84520 /var/tmp/bdevperf.sock 00:17:52.203 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:52.204 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84520 ']' 00:17:52.204 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.204 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.204 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.204 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.204 15:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.204 [2024-10-01 15:30:51.318236] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:17:52.204 [2024-10-01 15:30:51.318351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84520 ] 00:17:52.462 [2024-10-01 15:30:51.471445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.462 [2024-10-01 15:30:51.549489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.396 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.396 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:53.396 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z6DHlBUpjk 00:17:53.654 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:54.221 [2024-10-01 15:30:53.128691] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.221 nvme0n1 00:17:54.221 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:54.221 Running I/O for 1 seconds... 00:17:55.618 3731.00 IOPS, 14.57 MiB/s 00:17:55.618 Latency(us) 00:17:55.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.618 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:55.618 Verification LBA range: start 0x0 length 0x2000 00:17:55.618 nvme0n1 : 1.02 3795.77 14.83 0.00 0.00 33483.01 4230.05 30980.65 00:17:55.618 =================================================================================================================== 00:17:55.618 Total : 3795.77 14.83 0.00 0.00 33483.01 4230.05 30980.65 00:17:55.618 { 00:17:55.618 "results": [ 00:17:55.618 { 00:17:55.618 "job": "nvme0n1", 00:17:55.618 "core_mask": "0x2", 00:17:55.618 "workload": "verify", 00:17:55.618 "status": "finished", 00:17:55.618 "verify_range": { 00:17:55.618 "start": 0, 00:17:55.618 "length": 8192 00:17:55.618 }, 00:17:55.618 "queue_depth": 128, 00:17:55.618 "io_size": 4096, 00:17:55.618 "runtime": 1.016658, 00:17:55.618 "iops": 3795.7700623021706, 00:17:55.618 "mibps": 14.827226805867854, 00:17:55.618 "io_failed": 0, 00:17:55.618 "io_timeout": 0, 00:17:55.618 "avg_latency_us": 33483.00765624632, 00:17:55.618 "min_latency_us": 4230.050909090909, 00:17:55.618 "max_latency_us": 30980.654545454545 00:17:55.618 } 00:17:55.618 ], 00:17:55.618 "core_count": 1 00:17:55.618 } 00:17:55.618 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:55.618 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.618 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.618 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.618 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:55.618 "subsystems": [ 00:17:55.618 { 00:17:55.618 "subsystem": "keyring", 
00:17:55.618 "config": [ 00:17:55.618 { 00:17:55.618 "method": "keyring_file_add_key", 00:17:55.618 "params": { 00:17:55.618 "name": "key0", 00:17:55.618 "path": "/tmp/tmp.z6DHlBUpjk" 00:17:55.618 } 00:17:55.618 } 00:17:55.618 ] 00:17:55.618 }, 00:17:55.618 { 00:17:55.618 "subsystem": "iobuf", 00:17:55.618 "config": [ 00:17:55.618 { 00:17:55.618 "method": "iobuf_set_options", 00:17:55.618 "params": { 00:17:55.618 "large_bufsize": 135168, 00:17:55.618 "large_pool_count": 1024, 00:17:55.618 "small_bufsize": 8192, 00:17:55.618 "small_pool_count": 8192 00:17:55.618 } 00:17:55.618 } 00:17:55.618 ] 00:17:55.618 }, 00:17:55.618 { 00:17:55.618 "subsystem": "sock", 00:17:55.618 "config": [ 00:17:55.618 { 00:17:55.618 "method": "sock_set_default_impl", 00:17:55.618 "params": { 00:17:55.618 "impl_name": "posix" 00:17:55.618 } 00:17:55.618 }, 00:17:55.618 { 00:17:55.618 "method": "sock_impl_set_options", 00:17:55.618 "params": { 00:17:55.618 "enable_ktls": false, 00:17:55.618 "enable_placement_id": 0, 00:17:55.618 "enable_quickack": false, 00:17:55.618 "enable_recv_pipe": true, 00:17:55.618 "enable_zerocopy_send_client": false, 00:17:55.618 "enable_zerocopy_send_server": true, 00:17:55.618 "impl_name": "ssl", 00:17:55.618 "recv_buf_size": 4096, 00:17:55.618 "send_buf_size": 4096, 00:17:55.618 "tls_version": 0, 00:17:55.618 "zerocopy_threshold": 0 00:17:55.618 } 00:17:55.618 }, 00:17:55.618 { 00:17:55.618 "method": "sock_impl_set_options", 00:17:55.618 "params": { 00:17:55.618 "enable_ktls": false, 00:17:55.618 "enable_placement_id": 0, 00:17:55.618 "enable_quickack": false, 00:17:55.618 "enable_recv_pipe": true, 00:17:55.618 "enable_zerocopy_send_client": false, 00:17:55.618 "enable_zerocopy_send_server": true, 00:17:55.618 "impl_name": "posix", 00:17:55.618 "recv_buf_size": 2097152, 00:17:55.618 "send_buf_size": 2097152, 00:17:55.618 "tls_version": 0, 00:17:55.618 "zerocopy_threshold": 0 00:17:55.618 } 00:17:55.618 } 00:17:55.618 ] 00:17:55.618 }, 00:17:55.618 { 00:17:55.618 "subsystem": "vmd", 00:17:55.618 "config": [] 00:17:55.618 }, 00:17:55.618 { 00:17:55.618 "subsystem": "accel", 00:17:55.618 "config": [ 00:17:55.618 { 00:17:55.618 "method": "accel_set_options", 00:17:55.618 "params": { 00:17:55.618 "buf_count": 2048, 00:17:55.618 "large_cache_size": 16, 00:17:55.618 "sequence_count": 2048, 00:17:55.618 "small_cache_size": 128, 00:17:55.618 "task_count": 2048 00:17:55.618 } 00:17:55.618 } 00:17:55.618 ] 00:17:55.618 }, 00:17:55.618 { 00:17:55.618 "subsystem": "bdev", 00:17:55.618 "config": [ 00:17:55.618 { 00:17:55.618 "method": "bdev_set_options", 00:17:55.618 "params": { 00:17:55.618 "bdev_auto_examine": true, 00:17:55.618 "bdev_io_cache_size": 256, 00:17:55.618 "bdev_io_pool_size": 65535, 00:17:55.618 "iobuf_large_cache_size": 16, 00:17:55.618 "iobuf_small_cache_size": 128 00:17:55.618 } 00:17:55.618 }, 00:17:55.618 { 00:17:55.618 "method": "bdev_raid_set_options", 00:17:55.618 "params": { 00:17:55.618 "process_max_bandwidth_mb_sec": 0, 00:17:55.618 "process_window_size_kb": 1024 00:17:55.618 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "bdev_iscsi_set_options", 00:17:55.619 "params": { 00:17:55.619 "timeout_sec": 30 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "bdev_nvme_set_options", 00:17:55.619 "params": { 00:17:55.619 "action_on_timeout": "none", 00:17:55.619 "allow_accel_sequence": false, 00:17:55.619 "arbitration_burst": 0, 00:17:55.619 "bdev_retry_count": 3, 00:17:55.619 "ctrlr_loss_timeout_sec": 0, 00:17:55.619 "delay_cmd_submit": true, 00:17:55.619 
"dhchap_dhgroups": [ 00:17:55.619 "null", 00:17:55.619 "ffdhe2048", 00:17:55.619 "ffdhe3072", 00:17:55.619 "ffdhe4096", 00:17:55.619 "ffdhe6144", 00:17:55.619 "ffdhe8192" 00:17:55.619 ], 00:17:55.619 "dhchap_digests": [ 00:17:55.619 "sha256", 00:17:55.619 "sha384", 00:17:55.619 "sha512" 00:17:55.619 ], 00:17:55.619 "disable_auto_failback": false, 00:17:55.619 "fast_io_fail_timeout_sec": 0, 00:17:55.619 "generate_uuids": false, 00:17:55.619 "high_priority_weight": 0, 00:17:55.619 "io_path_stat": false, 00:17:55.619 "io_queue_requests": 0, 00:17:55.619 "keep_alive_timeout_ms": 10000, 00:17:55.619 "low_priority_weight": 0, 00:17:55.619 "medium_priority_weight": 0, 00:17:55.619 "nvme_adminq_poll_period_us": 10000, 00:17:55.619 "nvme_error_stat": false, 00:17:55.619 "nvme_ioq_poll_period_us": 0, 00:17:55.619 "rdma_cm_event_timeout_ms": 0, 00:17:55.619 "rdma_max_cq_size": 0, 00:17:55.619 "rdma_srq_size": 0, 00:17:55.619 "reconnect_delay_sec": 0, 00:17:55.619 "timeout_admin_us": 0, 00:17:55.619 "timeout_us": 0, 00:17:55.619 "transport_ack_timeout": 0, 00:17:55.619 "transport_retry_count": 4, 00:17:55.619 "transport_tos": 0 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "bdev_nvme_set_hotplug", 00:17:55.619 "params": { 00:17:55.619 "enable": false, 00:17:55.619 "period_us": 100000 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "bdev_malloc_create", 00:17:55.619 "params": { 00:17:55.619 "block_size": 4096, 00:17:55.619 "dif_is_head_of_md": false, 00:17:55.619 "dif_pi_format": 0, 00:17:55.619 "dif_type": 0, 00:17:55.619 "md_size": 0, 00:17:55.619 "name": "malloc0", 00:17:55.619 "num_blocks": 8192, 00:17:55.619 "optimal_io_boundary": 0, 00:17:55.619 "physical_block_size": 4096, 00:17:55.619 "uuid": "657c2022-40cf-466d-a122-98bfa4179227" 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "bdev_wait_for_examine" 00:17:55.619 } 00:17:55.619 ] 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "subsystem": "nbd", 00:17:55.619 "config": [] 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "subsystem": "scheduler", 00:17:55.619 "config": [ 00:17:55.619 { 00:17:55.619 "method": "framework_set_scheduler", 00:17:55.619 "params": { 00:17:55.619 "name": "static" 00:17:55.619 } 00:17:55.619 } 00:17:55.619 ] 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "subsystem": "nvmf", 00:17:55.619 "config": [ 00:17:55.619 { 00:17:55.619 "method": "nvmf_set_config", 00:17:55.619 "params": { 00:17:55.619 "admin_cmd_passthru": { 00:17:55.619 "identify_ctrlr": false 00:17:55.619 }, 00:17:55.619 "dhchap_dhgroups": [ 00:17:55.619 "null", 00:17:55.619 "ffdhe2048", 00:17:55.619 "ffdhe3072", 00:17:55.619 "ffdhe4096", 00:17:55.619 "ffdhe6144", 00:17:55.619 "ffdhe8192" 00:17:55.619 ], 00:17:55.619 "dhchap_digests": [ 00:17:55.619 "sha256", 00:17:55.619 "sha384", 00:17:55.619 "sha512" 00:17:55.619 ], 00:17:55.619 "discovery_filter": "match_any" 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "nvmf_set_max_subsystems", 00:17:55.619 "params": { 00:17:55.619 "max_subsystems": 1024 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "nvmf_set_crdt", 00:17:55.619 "params": { 00:17:55.619 "crdt1": 0, 00:17:55.619 "crdt2": 0, 00:17:55.619 "crdt3": 0 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "nvmf_create_transport", 00:17:55.619 "params": { 00:17:55.619 "abort_timeout_sec": 1, 00:17:55.619 "ack_timeout": 0, 00:17:55.619 "buf_cache_size": 4294967295, 00:17:55.619 "c2h_success": false, 00:17:55.619 "data_wr_pool_size": 0, 00:17:55.619 
"dif_insert_or_strip": false, 00:17:55.619 "in_capsule_data_size": 4096, 00:17:55.619 "io_unit_size": 131072, 00:17:55.619 "max_aq_depth": 128, 00:17:55.619 "max_io_qpairs_per_ctrlr": 127, 00:17:55.619 "max_io_size": 131072, 00:17:55.619 "max_queue_depth": 128, 00:17:55.619 "num_shared_buffers": 511, 00:17:55.619 "sock_priority": 0, 00:17:55.619 "trtype": "TCP", 00:17:55.619 "zcopy": false 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "nvmf_create_subsystem", 00:17:55.619 "params": { 00:17:55.619 "allow_any_host": false, 00:17:55.619 "ana_reporting": false, 00:17:55.619 "max_cntlid": 65519, 00:17:55.619 "max_namespaces": 32, 00:17:55.619 "min_cntlid": 1, 00:17:55.619 "model_number": "SPDK bdev Controller", 00:17:55.619 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.619 "serial_number": "00000000000000000000" 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "nvmf_subsystem_add_host", 00:17:55.619 "params": { 00:17:55.619 "host": "nqn.2016-06.io.spdk:host1", 00:17:55.619 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.619 "psk": "key0" 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "nvmf_subsystem_add_ns", 00:17:55.619 "params": { 00:17:55.619 "namespace": { 00:17:55.619 "bdev_name": "malloc0", 00:17:55.619 "nguid": "657C202240CF466DA12298BFA4179227", 00:17:55.619 "no_auto_visible": false, 00:17:55.619 "nsid": 1, 00:17:55.619 "uuid": "657c2022-40cf-466d-a122-98bfa4179227" 00:17:55.619 }, 00:17:55.619 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:55.619 } 00:17:55.619 }, 00:17:55.619 { 00:17:55.619 "method": "nvmf_subsystem_add_listener", 00:17:55.619 "params": { 00:17:55.619 "listen_address": { 00:17:55.619 "adrfam": "IPv4", 00:17:55.619 "traddr": "10.0.0.3", 00:17:55.619 "trsvcid": "4420", 00:17:55.619 "trtype": "TCP" 00:17:55.619 }, 00:17:55.619 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.619 "secure_channel": false, 00:17:55.619 "sock_impl": "ssl" 00:17:55.619 } 00:17:55.619 } 00:17:55.619 ] 00:17:55.619 } 00:17:55.619 ] 00:17:55.619 }' 00:17:55.619 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:55.878 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:55.878 "subsystems": [ 00:17:55.878 { 00:17:55.878 "subsystem": "keyring", 00:17:55.878 "config": [ 00:17:55.878 { 00:17:55.878 "method": "keyring_file_add_key", 00:17:55.878 "params": { 00:17:55.878 "name": "key0", 00:17:55.878 "path": "/tmp/tmp.z6DHlBUpjk" 00:17:55.878 } 00:17:55.878 } 00:17:55.878 ] 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "subsystem": "iobuf", 00:17:55.878 "config": [ 00:17:55.878 { 00:17:55.878 "method": "iobuf_set_options", 00:17:55.878 "params": { 00:17:55.878 "large_bufsize": 135168, 00:17:55.878 "large_pool_count": 1024, 00:17:55.878 "small_bufsize": 8192, 00:17:55.878 "small_pool_count": 8192 00:17:55.878 } 00:17:55.878 } 00:17:55.878 ] 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "subsystem": "sock", 00:17:55.878 "config": [ 00:17:55.878 { 00:17:55.878 "method": "sock_set_default_impl", 00:17:55.878 "params": { 00:17:55.878 "impl_name": "posix" 00:17:55.878 } 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "method": "sock_impl_set_options", 00:17:55.878 "params": { 00:17:55.878 "enable_ktls": false, 00:17:55.878 "enable_placement_id": 0, 00:17:55.878 "enable_quickack": false, 00:17:55.878 "enable_recv_pipe": true, 00:17:55.878 "enable_zerocopy_send_client": false, 00:17:55.878 "enable_zerocopy_send_server": 
true, 00:17:55.878 "impl_name": "ssl", 00:17:55.878 "recv_buf_size": 4096, 00:17:55.878 "send_buf_size": 4096, 00:17:55.878 "tls_version": 0, 00:17:55.878 "zerocopy_threshold": 0 00:17:55.878 } 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "method": "sock_impl_set_options", 00:17:55.878 "params": { 00:17:55.878 "enable_ktls": false, 00:17:55.878 "enable_placement_id": 0, 00:17:55.878 "enable_quickack": false, 00:17:55.878 "enable_recv_pipe": true, 00:17:55.878 "enable_zerocopy_send_client": false, 00:17:55.878 "enable_zerocopy_send_server": true, 00:17:55.878 "impl_name": "posix", 00:17:55.878 "recv_buf_size": 2097152, 00:17:55.878 "send_buf_size": 2097152, 00:17:55.878 "tls_version": 0, 00:17:55.878 "zerocopy_threshold": 0 00:17:55.878 } 00:17:55.878 } 00:17:55.878 ] 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "subsystem": "vmd", 00:17:55.878 "config": [] 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "subsystem": "accel", 00:17:55.878 "config": [ 00:17:55.878 { 00:17:55.878 "method": "accel_set_options", 00:17:55.878 "params": { 00:17:55.878 "buf_count": 2048, 00:17:55.878 "large_cache_size": 16, 00:17:55.878 "sequence_count": 2048, 00:17:55.878 "small_cache_size": 128, 00:17:55.878 "task_count": 2048 00:17:55.878 } 00:17:55.878 } 00:17:55.878 ] 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "subsystem": "bdev", 00:17:55.878 "config": [ 00:17:55.878 { 00:17:55.878 "method": "bdev_set_options", 00:17:55.878 "params": { 00:17:55.878 "bdev_auto_examine": true, 00:17:55.878 "bdev_io_cache_size": 256, 00:17:55.878 "bdev_io_pool_size": 65535, 00:17:55.878 "iobuf_large_cache_size": 16, 00:17:55.878 "iobuf_small_cache_size": 128 00:17:55.878 } 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "method": "bdev_raid_set_options", 00:17:55.878 "params": { 00:17:55.878 "process_max_bandwidth_mb_sec": 0, 00:17:55.878 "process_window_size_kb": 1024 00:17:55.878 } 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "method": "bdev_iscsi_set_options", 00:17:55.878 "params": { 00:17:55.878 "timeout_sec": 30 00:17:55.878 } 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "method": "bdev_nvme_set_options", 00:17:55.878 "params": { 00:17:55.878 "action_on_timeout": "none", 00:17:55.878 "allow_accel_sequence": false, 00:17:55.878 "arbitration_burst": 0, 00:17:55.878 "bdev_retry_count": 3, 00:17:55.878 "ctrlr_loss_timeout_sec": 0, 00:17:55.878 "delay_cmd_submit": true, 00:17:55.878 "dhchap_dhgroups": [ 00:17:55.878 "null", 00:17:55.878 "ffdhe2048", 00:17:55.878 "ffdhe3072", 00:17:55.878 "ffdhe4096", 00:17:55.878 "ffdhe6144", 00:17:55.878 "ffdhe8192" 00:17:55.878 ], 00:17:55.878 "dhchap_digests": [ 00:17:55.878 "sha256", 00:17:55.878 "sha384", 00:17:55.878 "sha512" 00:17:55.878 ], 00:17:55.878 "disable_auto_failback": false, 00:17:55.878 "fast_io_fail_timeout_sec": 0, 00:17:55.878 "generate_uuids": false, 00:17:55.878 "high_priority_weight": 0, 00:17:55.879 "io_path_stat": false, 00:17:55.879 "io_queue_requests": 512, 00:17:55.879 "keep_alive_timeout_ms": 10000, 00:17:55.879 "low_priority_weight": 0, 00:17:55.879 "medium_priority_weight": 0, 00:17:55.879 "nvme_adminq_poll_period_us": 10000, 00:17:55.879 "nvme_error_stat": false, 00:17:55.879 "nvme_ioq_poll_period_us": 0, 00:17:55.879 "rdma_cm_event_timeout_ms": 0, 00:17:55.879 "rdma_max_cq_size": 0, 00:17:55.879 "rdma_srq_size": 0, 00:17:55.879 "reconnect_delay_sec": 0, 00:17:55.879 "timeout_admin_us": 0, 00:17:55.879 "timeout_us": 0, 00:17:55.879 "transport_ack_timeout": 0, 00:17:55.879 "transport_retry_count": 4, 00:17:55.879 "transport_tos": 0 00:17:55.879 } 00:17:55.879 }, 
00:17:55.879 { 00:17:55.879 "method": "bdev_nvme_attach_controller", 00:17:55.879 "params": { 00:17:55.879 "adrfam": "IPv4", 00:17:55.879 "ctrlr_loss_timeout_sec": 0, 00:17:55.879 "ddgst": false, 00:17:55.879 "fast_io_fail_timeout_sec": 0, 00:17:55.879 "hdgst": false, 00:17:55.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.879 "multipath": "multipath", 00:17:55.879 "name": "nvme0", 00:17:55.879 "prchk_guard": false, 00:17:55.879 "prchk_reftag": false, 00:17:55.879 "psk": "key0", 00:17:55.879 "reconnect_delay_sec": 0, 00:17:55.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.879 "traddr": "10.0.0.3", 00:17:55.879 "trsvcid": "4420", 00:17:55.879 "trtype": "TCP" 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "bdev_nvme_set_hotplug", 00:17:55.879 "params": { 00:17:55.879 "enable": false, 00:17:55.879 "period_us": 100000 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "bdev_enable_histogram", 00:17:55.879 "params": { 00:17:55.879 "enable": true, 00:17:55.879 "name": "nvme0n1" 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "bdev_wait_for_examine" 00:17:55.879 } 00:17:55.879 ] 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "subsystem": "nbd", 00:17:55.879 "config": [] 00:17:55.879 } 00:17:55.879 ] 00:17:55.879 }' 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84520 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84520 ']' 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84520 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84520 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:55.879 killing process with pid 84520 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84520' 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84520 00:17:55.879 Received shutdown signal, test time was about 1.000000 seconds 00:17:55.879 00:17:55.879 Latency(us) 00:17:55.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.879 =================================================================================================================== 00:17:55.879 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.879 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84520 00:17:56.138 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84470 00:17:56.138 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84470 ']' 00:17:56.138 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84470 00:17:56.138 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:56.138 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:56.138 15:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84470 00:17:56.138 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:56.138 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:56.138 killing process with pid 84470 00:17:56.138 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84470' 00:17:56.138 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84470 00:17:56.138 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84470 00:17:56.397 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:56.397 "subsystems": [ 00:17:56.397 { 00:17:56.397 "subsystem": "keyring", 00:17:56.397 "config": [ 00:17:56.397 { 00:17:56.397 "method": "keyring_file_add_key", 00:17:56.397 "params": { 00:17:56.397 "name": "key0", 00:17:56.397 "path": "/tmp/tmp.z6DHlBUpjk" 00:17:56.397 } 00:17:56.397 } 00:17:56.397 ] 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "subsystem": "iobuf", 00:17:56.397 "config": [ 00:17:56.397 { 00:17:56.397 "method": "iobuf_set_options", 00:17:56.397 "params": { 00:17:56.397 "large_bufsize": 135168, 00:17:56.397 "large_pool_count": 1024, 00:17:56.397 "small_bufsize": 8192, 00:17:56.397 "small_pool_count": 8192 00:17:56.397 } 00:17:56.397 } 00:17:56.397 ] 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "subsystem": "sock", 00:17:56.397 "config": [ 00:17:56.397 { 00:17:56.397 "method": "sock_set_default_impl", 00:17:56.397 "params": { 00:17:56.397 "impl_name": "posix" 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "sock_impl_set_options", 00:17:56.397 "params": { 00:17:56.397 "enable_ktls": false, 00:17:56.397 "enable_placement_id": 0, 00:17:56.397 "enable_quickack": false, 00:17:56.397 "enable_recv_pipe": true, 00:17:56.397 "enable_zerocopy_send_client": false, 00:17:56.397 "enable_zerocopy_send_server": true, 00:17:56.397 "impl_name": "ssl", 00:17:56.397 "recv_buf_size": 4096, 00:17:56.397 "send_buf_size": 4096, 00:17:56.397 "tls_version": 0, 00:17:56.397 "zerocopy_threshold": 0 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "sock_impl_set_options", 00:17:56.397 "params": { 00:17:56.397 "enable_ktls": false, 00:17:56.397 "enable_placement_id": 0, 00:17:56.397 "enable_quickack": false, 00:17:56.397 "enable_recv_pipe": true, 00:17:56.397 "enable_zerocopy_send_client": false, 00:17:56.397 "enable_zerocopy_send_server": true, 00:17:56.397 "impl_name": "posix", 00:17:56.397 "recv_buf_size": 2097152, 00:17:56.397 "send_buf_size": 2097152, 00:17:56.397 "tls_version": 0, 00:17:56.397 "zerocopy_threshold": 0 00:17:56.397 } 00:17:56.397 } 00:17:56.397 ] 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "subsystem": "vmd", 00:17:56.397 "config": [] 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "subsystem": "accel", 00:17:56.397 "config": [ 00:17:56.397 { 00:17:56.397 "method": "accel_set_options", 00:17:56.397 "params": { 00:17:56.397 "buf_count": 2048, 00:17:56.397 "large_cache_size": 16, 00:17:56.397 "sequence_count": 2048, 00:17:56.397 "small_cache_size": 128, 00:17:56.397 "task_count": 2048 00:17:56.397 } 00:17:56.397 } 00:17:56.397 ] 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "subsystem": "bdev", 00:17:56.397 "config": [ 00:17:56.397 { 00:17:56.397 "method": "bdev_set_options", 00:17:56.397 "params": { 00:17:56.397 
"bdev_auto_examine": true, 00:17:56.397 "bdev_io_cache_size": 256, 00:17:56.397 "bdev_io_pool_size": 65535, 00:17:56.397 "iobuf_large_cache_size": 16, 00:17:56.397 "iobuf_small_cache_size": 128 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "bdev_raid_set_options", 00:17:56.397 "params": { 00:17:56.397 "process_max_bandwidth_mb_sec": 0, 00:17:56.397 "process_window_size_kb": 1024 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "bdev_iscsi_set_options", 00:17:56.397 "params": { 00:17:56.397 "timeout_sec": 30 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "bdev_nvme_set_options", 00:17:56.397 "params": { 00:17:56.397 "action_on_timeout": "none", 00:17:56.397 "allow_accel_sequence": false, 00:17:56.397 "arbitration_burst": 0, 00:17:56.397 "bdev_retry_count": 3, 00:17:56.397 "ctrlr_loss_timeout_sec": 0, 00:17:56.397 "delay_cmd_submit": true, 00:17:56.397 "dhchap_dhgroups": [ 00:17:56.397 "null", 00:17:56.397 "ffdhe2048", 00:17:56.397 "ffdhe3072", 00:17:56.397 "ffdhe4096", 00:17:56.397 "ffdhe6144", 00:17:56.397 "ffdhe8192" 00:17:56.397 ], 00:17:56.397 "dhchap_digests": [ 00:17:56.397 "sha256", 00:17:56.397 "sha384", 00:17:56.397 "sha512" 00:17:56.397 ], 00:17:56.397 "disable_auto_failback": false, 00:17:56.397 "fast_io_fail_timeout_sec": 0, 00:17:56.397 "generate_uuids": false, 00:17:56.397 "high_priority_weight": 0, 00:17:56.397 "io_path_stat": false, 00:17:56.397 "io_queue_requests": 0, 00:17:56.397 "keep_alive_timeout_ms": 10000, 00:17:56.397 "low_priority_weight": 0, 00:17:56.397 "medium_priority_weight": 0, 00:17:56.397 "nvme_adminq_poll_period_us": 10000, 00:17:56.397 "nvme_error_stat": false, 00:17:56.397 "nvme_ioq_poll_period_us": 0, 00:17:56.397 "rdma_cm_event_timeout_ms": 0, 00:17:56.397 "rdma_max_cq_size": 0, 00:17:56.397 "rdma_srq_size": 0, 00:17:56.397 "reconnect_delay_sec": 0, 00:17:56.397 "timeout_admin_us": 0, 00:17:56.397 "timeout_us": 0, 00:17:56.397 "transport_ack_timeout": 0, 00:17:56.397 "transport_retry_count": 4, 00:17:56.397 "transport_tos": 0 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "bdev_nvme_set_hotplug", 00:17:56.397 "params": { 00:17:56.397 "enable": false, 00:17:56.397 "period_us": 100000 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "bdev_malloc_create", 00:17:56.397 "params": { 00:17:56.397 "block_size": 4096, 00:17:56.397 "dif_is_head_of_md": false, 00:17:56.397 "dif_pi_format": 0, 00:17:56.397 "dif_type": 0, 00:17:56.397 "md_size": 0, 00:17:56.397 "name": "malloc0", 00:17:56.397 "num_blocks": 8192, 00:17:56.397 "optimal_io_boundary": 0, 00:17:56.397 "physical_block_size": 4096, 00:17:56.397 "uuid": "657c2022-40cf-466d-a122-98bfa4179227" 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "bdev_wait_for_examine" 00:17:56.397 } 00:17:56.397 ] 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "subsystem": "nbd", 00:17:56.397 "config": [] 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "subsystem": "scheduler", 00:17:56.397 "config": [ 00:17:56.397 { 00:17:56.397 "method": "framework_set_scheduler", 00:17:56.397 "params": { 00:17:56.397 "name": "static" 00:17:56.397 } 00:17:56.397 } 00:17:56.397 ] 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "subsystem": "nvmf", 00:17:56.397 "config": [ 00:17:56.397 { 00:17:56.397 "method": "nvmf_set_config", 00:17:56.397 "params": { 00:17:56.397 "admin_cmd_passthru": { 00:17:56.397 "identify_ctrlr": false 00:17:56.397 }, 00:17:56.397 "dhchap_dhgroups": [ 00:17:56.397 "null", 00:17:56.397 "ffdhe2048", 
00:17:56.397 "ffdhe3072", 00:17:56.397 "ffdhe4096", 00:17:56.397 "ffdhe6144", 00:17:56.397 "ffdhe8192" 00:17:56.397 ], 00:17:56.397 "dhchap_digests": [ 00:17:56.397 "sha256", 00:17:56.397 "sha384", 00:17:56.397 "sha512" 00:17:56.397 ], 00:17:56.397 "discovery_filter": "match_any" 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "nvmf_set_max_subsystems", 00:17:56.397 "params": { 00:17:56.397 "max_subsystems": 1024 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "nvmf_set_crdt", 00:17:56.397 "params": { 00:17:56.397 "crdt1": 0, 00:17:56.397 "crdt2": 0, 00:17:56.397 "crdt3": 0 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "nvmf_create_transport", 00:17:56.397 "params": { 00:17:56.397 "abort_timeout_sec": 1, 00:17:56.397 "ack_timeout": 0, 00:17:56.397 "buf_cache_size": 4294967295, 00:17:56.397 "c2h_success": false, 00:17:56.397 "data_wr_pool_size": 0, 00:17:56.397 "dif_insert_or_strip": false, 00:17:56.397 "in_capsule_data_size": 4096, 00:17:56.397 "io_unit_size": 131072, 00:17:56.397 "max_aq_depth": 128, 00:17:56.397 "max_io_qpairs_per_ctrlr": 127, 00:17:56.397 "max_io_size": 131072, 00:17:56.397 "max_queue_depth": 128, 00:17:56.397 "num_shared_buffers": 511, 00:17:56.397 "sock_priority": 0, 00:17:56.397 "trtype": "TCP", 00:17:56.397 "zcopy": false 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "nvmf_create_subsystem", 00:17:56.397 "params": { 00:17:56.397 "allow_any_host": false, 00:17:56.397 "ana_reporting": false, 00:17:56.397 "max_cntlid": 65519, 00:17:56.397 "max_namespaces": 32, 00:17:56.397 "min_cntlid": 1, 00:17:56.397 "model_number": "SPDK bdev Controller", 00:17:56.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.397 "serial_number": "00000000000000000000" 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "nvmf_subsystem_add_host", 00:17:56.397 "params": { 00:17:56.397 "host": "nqn.2016-06.io.spdk:host1", 00:17:56.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.397 "psk": "key0" 00:17:56.397 } 00:17:56.397 }, 00:17:56.397 { 00:17:56.397 "method": "nvmf_subsystem_add_ns", 00:17:56.397 "params": { 00:17:56.397 "namespace": { 00:17:56.397 "bdev_name": "malloc0", 00:17:56.397 "nguid": "657C202240CF466DA12298BFA4179227", 00:17:56.397 "no_auto_visible": false, 00:17:56.398 "nsid": 1, 00:17:56.398 "uuid": "657c2022-40cf-466d-a122-98bfa4179227" 00:17:56.398 }, 00:17:56.398 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:56.398 } 00:17:56.398 }, 00:17:56.398 { 00:17:56.398 "method": "nvmf_subsystem_add_listener", 00:17:56.398 "params": { 00:17:56.398 "listen_address": { 00:17:56.398 "adrfam": "IPv4", 00:17:56.398 "traddr": "10.0.0.3", 00:17:56.398 "trsvcid": "4420", 00:17:56.398 "trtype": "TCP" 00:17:56.398 }, 00:17:56.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.398 "secure_channel": false, 00:17:56.398 "sock_impl": "ssl" 00:17:56.398 } 00:17:56.398 } 00:17:56.398 ] 00:17:56.398 } 00:17:56.398 ] 00:17:56.398 }' 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84615 00:17:56.398 15:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84615 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84615 ']' 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.398 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.398 [2024-10-01 15:30:55.430126] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:17:56.398 [2024-10-01 15:30:55.430218] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.656 [2024-10-01 15:30:55.569319] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.656 [2024-10-01 15:30:55.656112] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.656 [2024-10-01 15:30:55.656204] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.656 [2024-10-01 15:30:55.656228] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.656 [2024-10-01 15:30:55.656249] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.656 [2024-10-01 15:30:55.656273] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
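The -c /dev/fd/62 in the nvmf_tgt invocation above is bash process substitution: the JSON configuration document printed by the test script is handed to the target through a file descriptor instead of a file on disk. A minimal sketch of the pattern, with the configuration trimmed to a single transport (illustrative only; the run above passes the full multi-subsystem document shown before it):

# Feed a startup JSON config to nvmf_tgt without a temp file; <(...) expands
# to a /dev/fd/NN path, which is exactly what appears in the trace above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } }
      ]
    }
  ]
}
EOF
)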
00:17:56.656 [2024-10-01 15:30:55.656444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.914 [2024-10-01 15:30:55.852103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.914 [2024-10-01 15:30:55.890554] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.914 [2024-10-01 15:30:55.890776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84661 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84661 /var/tmp/bdevperf.sock 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84661 ']' 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
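Both sides of this TLS test reference a pre-shared key registered under the name key0: the target config above whitelists host1 with "psk": "key0", and the bdevperf configuration that follows registers the same name from the file /tmp/tmp.z6DHlBUpjk. A hedged sketch of how such a key file is typically prepared; the interchange-format string shown is the one the FIPS test further down in this log hardcodes, not the (unshown) key material of this particular TLS run:

# NVMe/TCP TLS PSK in interchange format: NVMeTLSkey-1:<hash>:<base64>:
key_path=$(mktemp -t spdk-psk.XXX)    # e.g. /tmp/tmp.z6DHlBUpjk in this run
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"                # the test scripts set owner-only permissions
# Registered on both target and initiator via the keyring subsystem, either in
# the startup JSON (as in this run) or at runtime, e.g.:
#   scripts/rpc.py keyring_file_add_key key0 "$key_path"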
00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.481 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:57.481 "subsystems": [ 00:17:57.481 { 00:17:57.481 "subsystem": "keyring", 00:17:57.481 "config": [ 00:17:57.481 { 00:17:57.481 "method": "keyring_file_add_key", 00:17:57.481 "params": { 00:17:57.481 "name": "key0", 00:17:57.481 "path": "/tmp/tmp.z6DHlBUpjk" 00:17:57.481 } 00:17:57.481 } 00:17:57.481 ] 00:17:57.481 }, 00:17:57.481 { 00:17:57.481 "subsystem": "iobuf", 00:17:57.481 "config": [ 00:17:57.481 { 00:17:57.481 "method": "iobuf_set_options", 00:17:57.481 "params": { 00:17:57.481 "large_bufsize": 135168, 00:17:57.481 "large_pool_count": 1024, 00:17:57.481 "small_bufsize": 8192, 00:17:57.481 "small_pool_count": 8192 00:17:57.481 } 00:17:57.481 } 00:17:57.481 ] 00:17:57.481 }, 00:17:57.481 { 00:17:57.481 "subsystem": "sock", 00:17:57.481 "config": [ 00:17:57.481 { 00:17:57.481 "method": "sock_set_default_impl", 00:17:57.481 "params": { 00:17:57.481 "impl_name": "posix" 00:17:57.481 } 00:17:57.481 }, 00:17:57.481 { 00:17:57.481 "method": "sock_impl_set_options", 00:17:57.481 "params": { 00:17:57.481 "enable_ktls": false, 00:17:57.481 "enable_placement_id": 0, 00:17:57.481 "enable_quickack": false, 00:17:57.481 "enable_recv_pipe": true, 00:17:57.481 "enable_zerocopy_send_client": false, 00:17:57.481 "enable_zerocopy_send_server": true, 00:17:57.481 "impl_name": "ssl", 00:17:57.481 "recv_buf_size": 4096, 00:17:57.481 "send_buf_size": 4096, 00:17:57.481 "tls_version": 0, 00:17:57.482 "zerocopy_threshold": 0 00:17:57.482 } 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "method": "sock_impl_set_options", 00:17:57.482 "params": { 00:17:57.482 "enable_ktls": false, 00:17:57.482 "enable_placement_id": 0, 00:17:57.482 "enable_quickack": false, 00:17:57.482 "enable_recv_pipe": true, 00:17:57.482 "enable_zerocopy_send_client": false, 00:17:57.482 "enable_zerocopy_send_server": true, 00:17:57.482 "impl_name": "posix", 00:17:57.482 "recv_buf_size": 2097152, 00:17:57.482 "send_buf_size": 2097152, 00:17:57.482 "tls_version": 0, 00:17:57.482 "zerocopy_threshold": 0 00:17:57.482 } 00:17:57.482 } 00:17:57.482 ] 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "subsystem": "vmd", 00:17:57.482 "config": [] 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "subsystem": "accel", 00:17:57.482 "config": [ 00:17:57.482 { 00:17:57.482 "method": "accel_set_options", 00:17:57.482 "params": { 00:17:57.482 "buf_count": 2048, 00:17:57.482 "large_cache_size": 16, 00:17:57.482 "sequence_count": 2048, 00:17:57.482 "small_cache_size": 128, 00:17:57.482 "task_count": 2048 00:17:57.482 } 00:17:57.482 } 00:17:57.482 ] 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "subsystem": "bdev", 00:17:57.482 "config": [ 00:17:57.482 { 00:17:57.482 "method": "bdev_set_options", 00:17:57.482 "params": { 00:17:57.482 "bdev_auto_examine": true, 00:17:57.482 "bdev_io_cache_size": 256, 00:17:57.482 "bdev_io_pool_size": 65535, 00:17:57.482 "iobuf_large_cache_size": 16, 00:17:57.482 "iobuf_small_cache_size": 128 00:17:57.482 } 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "method": "bdev_raid_set_options", 00:17:57.482 "params": { 00:17:57.482 "process_max_bandwidth_mb_sec": 0, 00:17:57.482 "process_window_size_kb": 1024 00:17:57.482 } 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "method": "bdev_iscsi_set_options", 00:17:57.482 "params": { 00:17:57.482 "timeout_sec": 30 00:17:57.482 } 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "method": 
"bdev_nvme_set_options", 00:17:57.482 "params": { 00:17:57.482 "action_on_timeout": "none", 00:17:57.482 "allow_accel_sequence": false, 00:17:57.482 "arbitration_burst": 0, 00:17:57.482 "bdev_retry_count": 3, 00:17:57.482 "ctrlr_loss_timeout_sec": 0, 00:17:57.482 "delay_cmd_submit": true, 00:17:57.482 "dhchap_dhgroups": [ 00:17:57.482 "null", 00:17:57.482 "ffdhe2048", 00:17:57.482 "ffdhe3072", 00:17:57.482 "ffdhe4096", 00:17:57.482 "ffdhe6144", 00:17:57.482 "ffdhe8192" 00:17:57.482 ], 00:17:57.482 "dhchap_digests": [ 00:17:57.482 "sha256", 00:17:57.482 "sha384", 00:17:57.482 "sha512" 00:17:57.482 ], 00:17:57.482 "disable_auto_failback": false, 00:17:57.482 "fast_io_fail_timeout_sec": 0, 00:17:57.482 "generate_uuids": false, 00:17:57.482 "high_priority_weight": 0, 00:17:57.482 "io_path_stat": false, 00:17:57.482 "io_queue_requests": 512, 00:17:57.482 "keep_alive_timeout_ms": 10000, 00:17:57.482 "low_priority_weight": 0, 00:17:57.482 "medium_priority_weight": 0, 00:17:57.482 "nvme_adminq_poll_period_us": 10000, 00:17:57.482 "nvme_error_stat": false, 00:17:57.482 "nvme_ioq_poll_period_us": 0, 00:17:57.482 "rdma_cm_event_timeout_ms": 0, 00:17:57.482 "rdma_max_cq_size": 0, 00:17:57.482 "rdma_srq_size": 0, 00:17:57.482 "reconnect_delay_sec": 0, 00:17:57.482 "timeout_admin_us": 0, 00:17:57.482 "timeout_us": 0, 00:17:57.482 "transport_ack_timeout": 0, 00:17:57.482 "transport_retry_count": 4, 00:17:57.482 "transport_tos": 0 00:17:57.482 } 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "method": "bdev_nvme_attach_controller", 00:17:57.482 "params": { 00:17:57.482 "adrfam": "IPv4", 00:17:57.482 "ctrlr_loss_timeout_sec": 0, 00:17:57.482 "ddgst": false, 00:17:57.482 "fast_io_fail_timeout_sec": 0, 00:17:57.482 "hdgst": false, 00:17:57.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.482 "multipath": "multipath", 00:17:57.482 "name": "nvme0", 00:17:57.482 "prchk_guard": false, 00:17:57.482 "prchk_reftag": false, 00:17:57.482 "psk": "key0", 00:17:57.482 "reconnect_delay_sec": 0, 00:17:57.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.482 "traddr": "10.0.0.3", 00:17:57.482 "trsvcid": "4420", 00:17:57.482 "trtype": "TCP" 00:17:57.482 } 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "method": "bdev_nvme_set_hotplug", 00:17:57.482 "params": { 00:17:57.482 "enable": false, 00:17:57.482 "period_us": 100000 00:17:57.482 } 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "method": "bdev_enable_histogram", 00:17:57.482 "params": { 00:17:57.482 "enable": true, 00:17:57.482 "name": "nvme0n1" 00:17:57.482 } 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "method": "bdev_wait_for_examine" 00:17:57.482 } 00:17:57.482 ] 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "subsystem": "nbd", 00:17:57.482 "config": [] 00:17:57.482 } 00:17:57.482 ] 00:17:57.482 }' 00:17:57.482 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.482 [2024-10-01 15:30:56.628102] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:17:57.482 [2024-10-01 15:30:56.628921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84661 ] 00:17:57.740 [2024-10-01 15:30:56.770251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.740 [2024-10-01 15:30:56.858480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.998 [2024-10-01 15:30:57.003027] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.564 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.564 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:58.564 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:58.564 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:59.130 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.130 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:59.130 Running I/O for 1 seconds... 00:18:00.320 3712.00 IOPS, 14.50 MiB/s 00:18:00.320 Latency(us) 00:18:00.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.320 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:00.320 Verification LBA range: start 0x0 length 0x2000 00:18:00.320 nvme0n1 : 1.03 3740.53 14.61 0.00 0.00 33842.65 11141.12 24069.59 00:18:00.320 =================================================================================================================== 00:18:00.320 Total : 3740.53 14.61 0.00 0.00 33842.65 11141.12 24069.59 00:18:00.320 { 00:18:00.320 "results": [ 00:18:00.320 { 00:18:00.320 "job": "nvme0n1", 00:18:00.320 "core_mask": "0x2", 00:18:00.320 "workload": "verify", 00:18:00.320 "status": "finished", 00:18:00.320 "verify_range": { 00:18:00.320 "start": 0, 00:18:00.320 "length": 8192 00:18:00.320 }, 00:18:00.320 "queue_depth": 128, 00:18:00.320 "io_size": 4096, 00:18:00.320 "runtime": 1.026592, 00:18:00.320 "iops": 3740.531778934572, 00:18:00.320 "mibps": 14.611452261463171, 00:18:00.320 "io_failed": 0, 00:18:00.320 "io_timeout": 0, 00:18:00.320 "avg_latency_us": 33842.649212121214, 00:18:00.320 "min_latency_us": 11141.12, 00:18:00.320 "max_latency_us": 24069.585454545453 00:18:00.320 } 00:18:00.320 ], 00:18:00.320 "core_count": 1 00:18:00.320 } 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find 
/dev/shm -name '*.0' -printf '%f\n' 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:00.320 nvmf_trace.0 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84661 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84661 ']' 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84661 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:00.320 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.321 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84661 00:18:00.321 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:00.321 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:00.321 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84661' 00:18:00.321 killing process with pid 84661 00:18:00.321 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84661 00:18:00.321 Received shutdown signal, test time was about 1.000000 seconds 00:18:00.321 00:18:00.321 Latency(us) 00:18:00.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.321 =================================================================================================================== 00:18:00.321 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.321 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84661 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:00.578 rmmod nvme_tcp 00:18:00.578 rmmod nvme_fabrics 00:18:00.578 rmmod nvme_keyring 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@513 -- # '[' -n 84615 ']' 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 84615 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84615 ']' 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84615 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84615 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84615' 00:18:00.578 killing process with pid 84615 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84615 00:18:00.578 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84615 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:00.835 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.y7QzmDX9YZ /tmp/tmp.2rjyoiDZhp /tmp/tmp.z6DHlBUpjk 00:18:01.091 00:18:01.091 real 1m30.082s 00:18:01.091 user 2m29.817s 00:18:01.091 sys 0m27.306s 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.091 ************************************ 00:18:01.091 END TEST nvmf_tls 00:18:01.091 ************************************ 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:01.091 ************************************ 00:18:01.091 START TEST nvmf_fips 00:18:01.091 ************************************ 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:01.091 * Looking for test storage... 
00:18:01.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:18:01.091 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:01.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.350 --rc genhtml_branch_coverage=1 00:18:01.350 --rc genhtml_function_coverage=1 00:18:01.350 --rc genhtml_legend=1 00:18:01.350 --rc geninfo_all_blocks=1 00:18:01.350 --rc geninfo_unexecuted_blocks=1 00:18:01.350 00:18:01.350 ' 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:01.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.350 --rc genhtml_branch_coverage=1 00:18:01.350 --rc genhtml_function_coverage=1 00:18:01.350 --rc genhtml_legend=1 00:18:01.350 --rc geninfo_all_blocks=1 00:18:01.350 --rc geninfo_unexecuted_blocks=1 00:18:01.350 00:18:01.350 ' 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:01.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.350 --rc genhtml_branch_coverage=1 00:18:01.350 --rc genhtml_function_coverage=1 00:18:01.350 --rc genhtml_legend=1 00:18:01.350 --rc geninfo_all_blocks=1 00:18:01.350 --rc geninfo_unexecuted_blocks=1 00:18:01.350 00:18:01.350 ' 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:01.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.350 --rc genhtml_branch_coverage=1 00:18:01.350 --rc genhtml_function_coverage=1 00:18:01.350 --rc genhtml_legend=1 00:18:01.350 --rc geninfo_all_blocks=1 00:18:01.350 --rc geninfo_unexecuted_blocks=1 00:18:01.350 00:18:01.350 ' 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
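The scripts/common.sh trace above (repeated later for the OpenSSL 3.1.1 >= 3.0.0 check) is SPDK's generic cmp_versions helper: both version strings are split on ".", "-" and ":" and compared field by field. A simplified sketch of the idea, assuming plain numeric dotted versions (the real helper also normalizes non-numeric fields via its decimal function):

# ver_ge A B  ->  exit 0 if version A >= version B (numeric fields only)
ver_ge() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 0   # strictly newer
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 1   # strictly older
    done
    return 0   # all fields equal
}
ver_ge 3.1.1 3.0.0 && echo "OpenSSL is new enough for the FIPS checks"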
00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.350 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:01.351 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.351 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:01.610 Error setting digest 00:18:01.610 401241AE4C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:01.610 401241AE4C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:01.610 
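The "Error setting digest" lines above are the expected outcome, not a failure: with OPENSSL_CONF pointing at the generated spdk_fips.conf (base + FIPS providers, as listed by openssl list -providers), a non-approved digest such as MD5 must be rejected, and the test's NOT wrapper turns that rejection into a pass. A hedged standalone sketch of the same sanity check:

# With the FIPS provider enforcing, MD5 must fail; success here means FIPS
# mode is NOT actually active.
export OPENSSL_CONF=spdk_fips.conf
if echo -n test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 unexpectedly succeeded - FIPS provider is not enforcing" >&2
    exit 1
fi
echo "MD5 rejected as expected - FIPS mode active"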
15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:01.610 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:01.611 Cannot find device "nvmf_init_br" 00:18:01.611 15:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:01.611 Cannot find device "nvmf_init_br2" 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:01.611 Cannot find device "nvmf_tgt_br" 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:01.611 Cannot find device "nvmf_tgt_br2" 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:01.611 Cannot find device "nvmf_init_br" 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:01.611 Cannot find device "nvmf_init_br2" 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:01.611 Cannot find device "nvmf_tgt_br" 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:01.611 Cannot find device "nvmf_tgt_br2" 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:01.611 Cannot find device "nvmf_br" 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:01.611 Cannot find device "nvmf_init_if" 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:01.611 Cannot find device "nvmf_init_if2" 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:01.611 15:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:01.611 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:01.869 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:01.870 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:01.870 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:18:01.870 00:18:01.870 --- 10.0.0.3 ping statistics --- 00:18:01.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.870 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:01.870 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:01.870 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:18:01.870 00:18:01.870 --- 10.0.0.4 ping statistics --- 00:18:01.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.870 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:01.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:01.870 00:18:01.870 --- 10.0.0.1 ping statistics --- 00:18:01.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.870 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:01.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:01.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:18:01.870 00:18:01.870 --- 10.0.0.2 ping statistics --- 00:18:01.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.870 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=85005 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 85005 00:18:01.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 85005 ']' 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.870 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:02.128 [2024-10-01 15:31:01.050552] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
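[Editor's sketch] The bring-up traced above reduces to a small, repeatable recipe: four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, every host-side peer enslaved to one bridge, iptables openings for the NVMe/TCP port, and pings in both directions to prove reachability. A consolidated version using the names and addresses from this log (illustrative only; the canonical logic lives in test/nvmf/common.sh):

  # namespace for the target; the initiator stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk sh -c \
      'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br        # the bridge joins all host-side peers
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                        # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # target -> initiator

Tagging every filter rule with an SPDK_NVMF comment is what lets the teardown later in this log drop them wholesale via iptables-save | grep -v SPDK_NVMF | iptables-restore instead of tracking rule numbers.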
00:18:02.128 [2024-10-01 15:31:01.050647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.128 [2024-10-01 15:31:01.183202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.128 [2024-10-01 15:31:01.241582] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.128 [2024-10-01 15:31:01.241634] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.128 [2024-10-01 15:31:01.241647] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.128 [2024-10-01 15:31:01.241655] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.128 [2024-10-01 15:31:01.241662] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.128 [2024-10-01 15:31:01.241688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.N4Q 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.N4Q 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.N4Q 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.N4Q 00:18:02.387 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.684 [2024-10-01 15:31:01.650018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.684 [2024-10-01 15:31:01.665976] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:02.684 [2024-10-01 15:31:01.666206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:02.684 malloc0 00:18:02.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
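[Editor's sketch] The key staged above is a TLS pre-shared key in the NVMe/TCP PSK interchange format (the NVMeTLSkey-1:01: prefix carries the format version and hash identifier, 01 denoting the HMAC-SHA-256 variant), and the staging itself is plain shell. A minimal re-creation (the mktemp name varies per run; /tmp/spdk-psk.N4Q is simply what this run drew):

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n "$key" > "$key_path"    # -n matches the test's behavior: no trailing newline in the key file
  chmod 0600 "$key_path"          # the PSK is a secret; keep it owner-readable only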
00:18:02.684 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.684 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85045 00:18:02.684 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.684 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85045 /var/tmp/bdevperf.sock 00:18:02.685 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 85045 ']' 00:18:02.685 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.685 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.685 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.685 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.685 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:02.685 [2024-10-01 15:31:01.833559] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:18:02.685 [2024-10-01 15:31:01.833861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85045 ] 00:18:02.943 [2024-10-01 15:31:01.965778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.943 [2024-10-01 15:31:02.025343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.943 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.943 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:02.943 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.N4Q 00:18:03.508 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:03.766 [2024-10-01 15:31:02.777916] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.766 TLSTESTn1 00:18:03.766 15:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:04.023 Running I/O for 10 seconds... 
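[Editor's sketch] Condensed, the initiator-side sequence that launched this run is two RPCs against the bdevperf app's private socket plus the perform_tests trigger (arguments exactly as logged; this standalone form is a sketch of what fips.sh drives, not the script itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.N4Q
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the results that follow, MiB/s is just IOPS scaled by the 4096-byte I/O size (3874.03 * 4096 / 2^20 ~= 15.13), and the reported average latency of ~32981 us is consistent with Little's law at queue depth 128 (128 / 3874.03 ~= 33.0 ms).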
00:18:13.895 3782.00 IOPS, 14.77 MiB/s
3672.00 IOPS, 14.34 MiB/s
3725.33 IOPS, 14.55 MiB/s
3770.00 IOPS, 14.73 MiB/s
3806.40 IOPS, 14.87 MiB/s
3792.17 IOPS, 14.81 MiB/s
3815.86 IOPS, 14.91 MiB/s
3837.62 IOPS, 14.99 MiB/s
3854.11 IOPS, 15.06 MiB/s
3867.80 IOPS, 15.11 MiB/s
00:18:13.895 Latency(us)
00:18:13.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:13.895 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:13.895 Verification LBA range: start 0x0 length 0x2000
00:18:13.895 TLSTESTn1 : 10.02 3874.03 15.13 0.00 0.00 32980.99 5391.83 38844.97
00:18:13.895 ===================================================================================================================
00:18:13.895 Total : 3874.03 15.13 0.00 0.00 32980.99 5391.83 38844.97
00:18:13.895 {
00:18:13.895 "results": [
00:18:13.895 {
00:18:13.895 "job": "TLSTESTn1",
00:18:13.895 "core_mask": "0x4",
00:18:13.895 "workload": "verify",
00:18:13.895 "status": "finished",
00:18:13.895 "verify_range": {
00:18:13.895 "start": 0,
00:18:13.895 "length": 8192
00:18:13.895 },
00:18:13.895 "queue_depth": 128,
00:18:13.895 "io_size": 4096,
00:18:13.895 "runtime": 10.015933,
00:18:13.895 "iops": 3874.027511965186,
00:18:13.895 "mibps": 15.132919968614008,
00:18:13.895 "io_failed": 0,
00:18:13.895 "io_timeout": 0,
00:18:13.895 "avg_latency_us": 32980.99253740435,
00:18:13.895 "min_latency_us": 5391.825454545455,
00:18:13.895 "max_latency_us": 38844.97454545455
00:18:13.895 }
00:18:13.895 ],
00:18:13.895 "core_count": 1
00:18:13.895 }
00:18:13.895 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:18:13.895 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:18:13.895 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:18:13.895 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:18:13.895 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:18:13.895 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:18:13.895 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:18:13.895 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:18:13.895 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:18:13.895 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:18:13.895 nvmf_trace.0
00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85045
00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 85045 ']'
00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 85045
00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- #
ps --no-headers -o comm= 85045 00:18:14.154 killing process with pid 85045 00:18:14.154 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.154 00:18:14.154 Latency(us) 00:18:14.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.154 =================================================================================================================== 00:18:14.154 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85045' 00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 85045 00:18:14.154 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 85045 00:18:14.412 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:14.412 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:14.412 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:14.412 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:14.413 rmmod nvme_tcp 00:18:14.413 rmmod nvme_fabrics 00:18:14.413 rmmod nvme_keyring 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 85005 ']' 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 85005 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 85005 ']' 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 85005 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85005 00:18:14.413 killing process with pid 85005 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85005' 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 85005 00:18:14.413 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
85005 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:14.671 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.N4Q 00:18:14.930 ************************************ 00:18:14.930 END TEST nvmf_fips 00:18:14.930 ************************************ 00:18:14.930 00:18:14.930 real 0m13.739s 00:18:14.930 user 0m18.990s 00:18:14.930 sys 0m5.633s 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:14.930 ************************************ 00:18:14.930 START TEST nvmf_control_msg_list 00:18:14.930 ************************************ 00:18:14.930 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:14.930 * Looking for test storage... 00:18:14.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:14.930 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:14.930 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:18:14.930 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:15.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.189 --rc genhtml_branch_coverage=1 00:18:15.189 --rc genhtml_function_coverage=1 00:18:15.189 --rc genhtml_legend=1 00:18:15.189 --rc geninfo_all_blocks=1 00:18:15.189 --rc geninfo_unexecuted_blocks=1 00:18:15.189 00:18:15.189 ' 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:15.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.189 --rc genhtml_branch_coverage=1 00:18:15.189 --rc genhtml_function_coverage=1 00:18:15.189 --rc genhtml_legend=1 00:18:15.189 --rc geninfo_all_blocks=1 00:18:15.189 --rc geninfo_unexecuted_blocks=1 00:18:15.189 00:18:15.189 ' 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:15.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.189 --rc genhtml_branch_coverage=1 00:18:15.189 --rc genhtml_function_coverage=1 00:18:15.189 --rc genhtml_legend=1 00:18:15.189 --rc geninfo_all_blocks=1 00:18:15.189 --rc geninfo_unexecuted_blocks=1 00:18:15.189 00:18:15.189 ' 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:15.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.189 --rc genhtml_branch_coverage=1 00:18:15.189 --rc genhtml_function_coverage=1 00:18:15.189 --rc genhtml_legend=1 00:18:15.189 --rc geninfo_all_blocks=1 00:18:15.189 --rc geninfo_unexecuted_blocks=1 00:18:15.189 00:18:15.189 ' 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.189 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:15.190 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:15.190 Cannot find device "nvmf_init_br" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:15.190 Cannot find device "nvmf_init_br2" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:15.190 Cannot find device "nvmf_tgt_br" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.190 Cannot find device "nvmf_tgt_br2" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:15.190 Cannot find device "nvmf_init_br" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:15.190 Cannot find device "nvmf_init_br2" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:15.190 Cannot find device "nvmf_tgt_br" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:15.190 Cannot find device "nvmf_tgt_br2" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:15.190 Cannot find device "nvmf_br" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:15.190 Cannot find 
device "nvmf_init_if" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:15.190 Cannot find device "nvmf_init_if2" 00:18:15.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:18:15.191 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.191 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:18:15.191 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.191 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:18:15.191 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:15.191 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:15.191 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:15.191 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:15.191 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:15.449 15:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:15.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:15.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:18:15.449 00:18:15.449 --- 10.0.0.3 ping statistics --- 00:18:15.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.449 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:18:15.449 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:15.449 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:15.449 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:18:15.449 00:18:15.449 --- 10.0.0.4 ping statistics --- 00:18:15.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.450 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:15.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:15.450 00:18:15.450 --- 10.0.0.1 ping statistics --- 00:18:15.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.450 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:15.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:18:15.450 00:18:15.450 --- 10.0.0.2 ping statistics --- 00:18:15.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.450 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:15.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=85444 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 85444 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 85444 ']' 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
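[Editor's sketch] The target start just logged, together with the configuration applied a few entries below, condenses to the following (the until-loop is a crude stand-in for the suite's waitforlisten helper, with rpc_get_methods serving only as a cheap RPC to poll; paths as logged):

  spdk=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  "$spdk/scripts/rpc.py" nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  "$spdk/scripts/rpc.py" bdev_malloc_create -b Malloc0 32 512     # 32 MB malloc disk, 512-byte blocks
  "$spdk/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

--control-msg-num 1 squeezes the transport down to a single control-message buffer, which the three spdk_nvme_perf jobs launched afterwards (pids 85495, 85496, 85497, each -q 1 -o 4096 -w randread on its own core) then have to contend for; exercising that contention path is evidently the point of the control_msg_list test.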
00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:15.450 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:15.708 [2024-10-01 15:31:14.683690] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:18:15.708 [2024-10-01 15:31:14.684170] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.708 [2024-10-01 15:31:14.828163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.966 [2024-10-01 15:31:14.897956] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.966 [2024-10-01 15:31:14.898226] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.966 [2024-10-01 15:31:14.898414] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.966 [2024-10-01 15:31:14.898609] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.966 [2024-10-01 15:31:14.898624] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.966 [2024-10-01 15:31:14.898660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:16.898 [2024-10-01 15:31:15.813964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:16.898 Malloc0 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:16.898 [2024-10-01 15:31:15.849258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85495 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85496 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85497 00:18:16.898 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85495 00:18:16.899 15:31:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:16.899 [2024-10-01 15:31:16.027768] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:18:16.899 [2024-10-01 15:31:16.028055] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:16.899 [2024-10-01 15:31:16.047857] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:18.270 Initializing NVMe Controllers 00:18:18.270 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:18.270 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:18.270 Initialization complete. Launching workers. 00:18:18.270 ======================================================== 00:18:18.270 Latency(us) 00:18:18.270 Device Information : IOPS MiB/s Average min max 00:18:18.270 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2662.00 10.40 375.19 176.88 1372.64 00:18:18.270 ======================================================== 00:18:18.270 Total : 2662.00 10.40 375.19 176.88 1372.64 00:18:18.270 00:18:18.270 Initializing NVMe Controllers 00:18:18.270 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:18.270 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:18.270 Initialization complete. Launching workers. 00:18:18.270 ======================================================== 00:18:18.270 Latency(us) 00:18:18.270 Device Information : IOPS MiB/s Average min max 00:18:18.270 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2641.00 10.32 378.20 222.61 1377.22 00:18:18.271 ======================================================== 00:18:18.271 Total : 2641.00 10.32 378.20 222.61 1377.22 00:18:18.271 00:18:18.271 Initializing NVMe Controllers 00:18:18.271 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:18.271 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:18.271 Initialization complete. Launching workers. 
00:18:18.271 ======================================================== 00:18:18.271 Latency(us) 00:18:18.271 Device Information : IOPS MiB/s Average min max 00:18:18.271 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2664.00 10.41 374.87 134.81 1377.38 00:18:18.271 ======================================================== 00:18:18.271 Total : 2664.00 10.41 374.87 134.81 1377.38 00:18:18.271 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85496 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85497 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:18.271 rmmod nvme_tcp 00:18:18.271 rmmod nvme_fabrics 00:18:18.271 rmmod nvme_keyring 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 85444 ']' 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 85444 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 85444 ']' 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 85444 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85444 00:18:18.271 killing process with pid 85444 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85444' 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 85444 00:18:18.271 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 85444 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.529 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:18:18.788 00:18:18.788 real 0m3.766s 00:18:18.788 user 0m5.828s 00:18:18.788 
sys 0m1.399s 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:18.788 ************************************ 00:18:18.788 END TEST nvmf_control_msg_list 00:18:18.788 ************************************ 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:18.788 ************************************ 00:18:18.788 START TEST nvmf_wait_for_buf 00:18:18.788 ************************************ 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:18.788 * Looking for test storage... 00:18:18.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:18.788 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:18.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.789 --rc genhtml_branch_coverage=1 00:18:18.789 --rc genhtml_function_coverage=1 00:18:18.789 --rc genhtml_legend=1 00:18:18.789 --rc geninfo_all_blocks=1 00:18:18.789 --rc geninfo_unexecuted_blocks=1 00:18:18.789 00:18:18.789 ' 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:18.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.789 --rc genhtml_branch_coverage=1 00:18:18.789 --rc genhtml_function_coverage=1 00:18:18.789 --rc genhtml_legend=1 00:18:18.789 --rc geninfo_all_blocks=1 00:18:18.789 --rc geninfo_unexecuted_blocks=1 00:18:18.789 00:18:18.789 ' 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:18.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.789 --rc genhtml_branch_coverage=1 00:18:18.789 --rc genhtml_function_coverage=1 00:18:18.789 --rc genhtml_legend=1 00:18:18.789 --rc geninfo_all_blocks=1 00:18:18.789 --rc geninfo_unexecuted_blocks=1 00:18:18.789 00:18:18.789 ' 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:18.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.789 --rc genhtml_branch_coverage=1 00:18:18.789 --rc genhtml_function_coverage=1 00:18:18.789 --rc genhtml_legend=1 00:18:18.789 --rc geninfo_all_blocks=1 00:18:18.789 --rc geninfo_unexecuted_blocks=1 00:18:18.789 00:18:18.789 ' 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:18.789 15:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:18.789 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
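The nvmftestinit sequence below (nvmf_veth_init) wires up SPDK's loopback test network: a target network namespace joined to the root namespace through veth pairs and a bridge. A minimal sketch of the topology, using the interface names and 10.0.0.x addresses that appear in this log (an illustrative reconstruction of one initiator/target pair, not the common.sh implementation itself; the log also creates a second pair, nvmf_init_if2/nvmf_tgt_if2, the same way):

    # sketch: one initiator/target veth pair bridged into a target netns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge joins the two sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root-namespace initiator can reach the target namespace

With this in place, nvmf_tgt runs inside nvmf_tgt_ns_spdk listening on 10.0.0.3:4420 while spdk_nvme_perf connects from the root namespace, which is what the remainder of this test does.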
00:18:18.789 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:18.790 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:19.048 Cannot find device "nvmf_init_br" 00:18:19.048 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:18:19.048 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:19.048 Cannot find device "nvmf_init_br2" 00:18:19.048 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:18:19.048 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:19.048 Cannot find device "nvmf_tgt_br" 00:18:19.048 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:18:19.048 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.048 Cannot find device "nvmf_tgt_br2" 00:18:19.048 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:18:19.048 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:19.048 Cannot find device "nvmf_init_br" 00:18:19.048 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:18:19.048 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:19.048 Cannot find device "nvmf_init_br2" 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:19.048 Cannot find device "nvmf_tgt_br" 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:19.048 Cannot find device "nvmf_tgt_br2" 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:19.048 Cannot find device "nvmf_br" 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:19.048 Cannot find device "nvmf_init_if" 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:19.048 Cannot find device "nvmf_init_if2" 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.048 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.048 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:19.049 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.049 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.049 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:19.049 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:19.049 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.049 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:19.049 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:19.307 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:19.308 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:19.308 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:18:19.308 00:18:19.308 --- 10.0.0.3 ping statistics --- 00:18:19.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.308 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:19.308 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:19.308 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:18:19.308 00:18:19.308 --- 10.0.0.4 ping statistics --- 00:18:19.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.308 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:19.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:19.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:18:19.308 00:18:19.308 --- 10.0.0.1 ping statistics --- 00:18:19.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.308 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:19.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:19.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:18:19.308 00:18:19.308 --- 10.0.0.2 ping statistics --- 00:18:19.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.308 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=85736 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 85736 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 85736 ']' 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.308 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.308 [2024-10-01 15:31:18.468328] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:18:19.308 [2024-10-01 15:31:18.468447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.566 [2024-10-01 15:31:18.600242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.566 [2024-10-01 15:31:18.658711] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.566 [2024-10-01 15:31:18.659004] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.566 [2024-10-01 15:31:18.659025] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.566 [2024-10-01 15:31:18.659034] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.566 [2024-10-01 15:31:18.659042] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.566 [2024-10-01 15:31:18.659072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.566 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.566 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:18:19.566 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:19.566 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.566 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.824 15:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 Malloc0 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 [2024-10-01 15:31:18.830063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 [2024-10-01 15:31:18.854187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.824 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:20.081 [2024-10-01 15:31:19.040626] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:18:21.456 Initializing NVMe Controllers 00:18:21.456 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:21.456 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:21.456 Initialization complete. Launching workers. 00:18:21.456 ======================================================== 00:18:21.456 Latency(us) 00:18:21.456 Device Information : IOPS MiB/s Average min max 00:18:21.456 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.99 16.00 32623.50 8031.69 64012.41 00:18:21.456 ======================================================== 00:18:21.456 Total : 127.99 16.00 32623.50 8031.69 64012.41 00:18:21.456 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:21.456 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:21.456 rmmod nvme_tcp 00:18:21.456 rmmod nvme_fabrics 00:18:21.456 rmmod nvme_keyring 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 85736 ']' 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 85736 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 85736 ']' 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 85736 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 
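The pass criterion for nvmf_wait_for_buf just ran above: the TCP transport was created with a deliberately tiny shared buffer pool (-n 24 -b 24) and the iobuf small pool capped at 154 entries, so queue-depth-4 128 KiB reads force the target to wait for data buffers; the test passes only if the small-pool retry counter is nonzero (2022 here). A minimal sketch of that final check, assuming rpc.py from the repo talks to the default /var/tmp/spdk.sock (illustrative, not the wait_for_buf.sh source):

    # sketch: assert that the nvmf_TCP iobuf small pool actually ran dry
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    retry_count=$("$rpc" iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo "FAIL: no buffer-wait retries; pool was never exhausted" >&2
        exit 1
    fi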
00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85736 00:18:21.457 killing process with pid 85736 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85736' 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 85736 00:18:21.457 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 85736 00:18:21.715 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:21.715 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:21.715 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:21.716 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:21.974 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:21.974 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:21.974 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:21.975 15:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:21.975 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:21.975 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.975 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.975 15:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.975 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:18:21.975 00:18:21.975 real 0m3.277s 00:18:21.975 user 0m2.673s 00:18:21.975 sys 0m0.696s 00:18:21.975 ************************************ 00:18:21.975 END TEST nvmf_wait_for_buf 00:18:21.975 ************************************ 00:18:21.975 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:21.975 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:21.975 15:31:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:18:21.975 15:31:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:18:21.975 15:31:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:21.975 ************************************ 00:18:21.975 END TEST nvmf_target_extra 00:18:21.975 ************************************ 00:18:21.975 00:18:21.975 real 7m36.028s 00:18:21.975 user 18m27.823s 00:18:21.975 sys 1m24.646s 00:18:21.975 15:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:21.975 15:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:21.975 15:31:21 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:21.975 15:31:21 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:21.975 15:31:21 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.975 15:31:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:21.975 ************************************ 00:18:21.975 START TEST nvmf_host 00:18:21.975 ************************************ 00:18:21.975 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:22.233 * Looking for test storage... 
00:18:22.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:22.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.233 --rc genhtml_branch_coverage=1 00:18:22.233 --rc genhtml_function_coverage=1 00:18:22.233 --rc genhtml_legend=1 00:18:22.233 --rc geninfo_all_blocks=1 00:18:22.233 --rc geninfo_unexecuted_blocks=1 00:18:22.233 00:18:22.233 ' 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:22.233 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:22.233 --rc genhtml_branch_coverage=1 00:18:22.233 --rc genhtml_function_coverage=1 00:18:22.233 --rc genhtml_legend=1 00:18:22.233 --rc geninfo_all_blocks=1 00:18:22.233 --rc geninfo_unexecuted_blocks=1 00:18:22.233 00:18:22.233 ' 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:22.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.233 --rc genhtml_branch_coverage=1 00:18:22.233 --rc genhtml_function_coverage=1 00:18:22.233 --rc genhtml_legend=1 00:18:22.233 --rc geninfo_all_blocks=1 00:18:22.233 --rc geninfo_unexecuted_blocks=1 00:18:22.233 00:18:22.233 ' 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:22.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.233 --rc genhtml_branch_coverage=1 00:18:22.233 --rc genhtml_function_coverage=1 00:18:22.233 --rc genhtml_legend=1 00:18:22.233 --rc geninfo_all_blocks=1 00:18:22.233 --rc geninfo_unexecuted_blocks=1 00:18:22.233 00:18:22.233 ' 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.233 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.234 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
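The '[: : integer expression expected' message above is benign: line 33 of test/nvmf/common.sh evaluates '[' '' -eq 1 ']', and the [ builtin cannot parse an empty string as an integer when the guarded variable is unset, so the check simply falls through to the next branch. A minimal reproduction and a defensive variant (the variable name here is hypothetical):

# Reproduces the warning: an empty string is not an integer for the [ builtin.
flag=""
[ "$flag" -eq 1 ] && echo enabled    # prints: [: : integer expression expected

# Defaulting the expansion avoids the noise without changing the outcome.
[ "${flag:-0}" -eq 1 ] && echo enabled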
00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.234 ************************************ 00:18:22.234 START TEST nvmf_multicontroller 00:18:22.234 ************************************ 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:18:22.234 * Looking for test storage... 00:18:22.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:18:22.234 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:22.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.493 --rc genhtml_branch_coverage=1 00:18:22.493 --rc genhtml_function_coverage=1 00:18:22.493 --rc genhtml_legend=1 00:18:22.493 --rc geninfo_all_blocks=1 00:18:22.493 --rc geninfo_unexecuted_blocks=1 00:18:22.493 00:18:22.493 ' 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:22.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.493 --rc genhtml_branch_coverage=1 00:18:22.493 --rc genhtml_function_coverage=1 00:18:22.493 --rc genhtml_legend=1 00:18:22.493 --rc geninfo_all_blocks=1 00:18:22.493 --rc geninfo_unexecuted_blocks=1 00:18:22.493 00:18:22.493 ' 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:22.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.493 --rc genhtml_branch_coverage=1 00:18:22.493 --rc genhtml_function_coverage=1 00:18:22.493 --rc genhtml_legend=1 00:18:22.493 --rc geninfo_all_blocks=1 00:18:22.493 --rc geninfo_unexecuted_blocks=1 00:18:22.493 00:18:22.493 ' 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:22.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.493 --rc genhtml_branch_coverage=1 00:18:22.493 --rc genhtml_function_coverage=1 00:18:22.493 --rc genhtml_legend=1 00:18:22.493 --rc geninfo_all_blocks=1 00:18:22.493 --rc geninfo_unexecuted_blocks=1 00:18:22.493 00:18:22.493 ' 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:18:22.493 15:31:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.493 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.494 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:22.494 15:31:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.494 15:31:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:22.494 Cannot find device "nvmf_init_br" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:22.494 Cannot find device "nvmf_init_br2" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:22.494 Cannot find device "nvmf_tgt_br" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.494 Cannot find device "nvmf_tgt_br2" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:22.494 Cannot find device "nvmf_init_br" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:22.494 Cannot find device "nvmf_init_br2" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:22.494 Cannot find device "nvmf_tgt_br" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:22.494 Cannot find device "nvmf_tgt_br2" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:22.494 Cannot find device "nvmf_br" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:22.494 Cannot find device "nvmf_init_if" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:22.494 Cannot find device "nvmf_init_if2" 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:22.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:22.494 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:22.752 15:31:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:22.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:22.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:18:22.752 00:18:22.752 --- 10.0.0.3 ping statistics --- 00:18:22.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.752 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:22.752 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:22.752 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:22.752 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:18:22.752 00:18:22.752 --- 10.0.0.4 ping statistics --- 00:18:22.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.753 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:22.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:22.753 00:18:22.753 --- 10.0.0.1 ping statistics --- 00:18:22.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.753 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:22.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:22.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:22.753 00:18:22.753 --- 10.0.0.2 ping statistics --- 00:18:22.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.753 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@457 -- # return 0 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:22.753 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=86060 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 86060 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 86060 ']' 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.011 15:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.011 [2024-10-01 15:31:21.994563] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
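The four pings above are the last step of nvmf_veth_init, which builds the topology the whole suite runs on: a target network namespace holding two veth endpoints (10.0.0.3 and 10.0.0.4), two initiator veths in the root namespace (10.0.0.1 and 10.0.0.2), all four joined by a bridge, with iptables openings for the NVMe/TCP port. A condensed sketch of that setup, with names and addresses mirroring the commands in the log:

# A target netns with two veth endpoints, bridged to two initiator veths
# in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if       # first initiator address
ip addr add 10.0.0.2/24 dev nvmf_init_if2      # second initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
         nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" master nvmf_br            # join all four bridge-side ends
done

iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                  # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator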
00:18:23.011 [2024-10-01 15:31:21.994662] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.011 [2024-10-01 15:31:22.131844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:23.269 [2024-10-01 15:31:22.192000] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.269 [2024-10-01 15:31:22.192065] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.269 [2024-10-01 15:31:22.192078] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.269 [2024-10-01 15:31:22.192087] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.269 [2024-10-01 15:31:22.192094] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.269 [2024-10-01 15:31:22.192288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.269 [2024-10-01 15:31:22.192500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:23.269 [2024-10-01 15:31:22.192509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.269 [2024-10-01 15:31:22.316301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.269 Malloc0 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:23.269 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.270 [2024-10-01 15:31:22.362857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.270 [2024-10-01 15:31:22.370815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.270 Malloc1 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86099 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86099 /var/tmp/bdevperf.sock 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 86099 ']' 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
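At this point the target side is fully provisioned and bdevperf has been launched suspended. The sequence above, restated as a sketch (assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock; the suite repeats the subsystem steps for cnode2):

SPDK=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK/scripts/rpc.py" "$@"; }    # default socket /var/tmp/spdk.sock

# The target runs inside the test namespace so it can listen on 10.0.0.3.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten

rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB IO unit
rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

# bdevperf starts suspended (-z) on its own RPC socket so controllers can be
# attached first; the workload is 128-deep 4 KiB writes for one second.
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w write -t 1 -f &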
00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.270 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.835 NVMe0n1 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.835 1 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.835 2024/10/01 15:31:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:18:23.835 request: 00:18:23.835 { 00:18:23.835 "method": "bdev_nvme_attach_controller", 00:18:23.835 "params": { 00:18:23.835 "name": "NVMe0", 00:18:23.835 "trtype": "tcp", 00:18:23.835 "traddr": "10.0.0.3", 00:18:23.835 "adrfam": "ipv4", 00:18:23.835 "trsvcid": "4420", 00:18:23.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.835 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:18:23.835 "hostaddr": "10.0.0.1", 00:18:23.835 "prchk_reftag": false, 00:18:23.835 "prchk_guard": false, 00:18:23.835 "hdgst": false, 00:18:23.835 "ddgst": false, 00:18:23.835 "allow_unrecognized_csi": false 00:18:23.835 } 00:18:23.835 } 00:18:23.835 Got JSON-RPC error response 00:18:23.835 GoRPCClient: error on JSON-RPC call 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.835 2024/10/01 15:31:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:18:23.835 request: 00:18:23.835 { 00:18:23.835 "method": "bdev_nvme_attach_controller", 00:18:23.835 "params": { 00:18:23.835 "name": "NVMe0", 00:18:23.835 "trtype": "tcp", 00:18:23.835 "traddr": "10.0.0.3", 00:18:23.835 "adrfam": "ipv4", 00:18:23.835 "trsvcid": "4420", 00:18:23.835 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:23.835 "hostaddr": "10.0.0.1", 00:18:23.835 "prchk_reftag": false, 00:18:23.835 "prchk_guard": false, 00:18:23.835 "hdgst": false, 00:18:23.835 "ddgst": false, 00:18:23.835 "allow_unrecognized_csi": false 00:18:23.835 } 00:18:23.835 } 00:18:23.835 Got JSON-RPC error response 00:18:23.835 GoRPCClient: error on JSON-RPC call 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:23.835 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.836 2024/10/01 15:31:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:18:23.836 request: 00:18:23.836 { 00:18:23.836 
"method": "bdev_nvme_attach_controller", 00:18:23.836 "params": { 00:18:23.836 "name": "NVMe0", 00:18:23.836 "trtype": "tcp", 00:18:23.836 "traddr": "10.0.0.3", 00:18:23.836 "adrfam": "ipv4", 00:18:23.836 "trsvcid": "4420", 00:18:23.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.836 "hostaddr": "10.0.0.1", 00:18:23.836 "prchk_reftag": false, 00:18:23.836 "prchk_guard": false, 00:18:23.836 "hdgst": false, 00:18:23.836 "ddgst": false, 00:18:23.836 "multipath": "disable", 00:18:23.836 "allow_unrecognized_csi": false 00:18:23.836 } 00:18:23.836 } 00:18:23.836 Got JSON-RPC error response 00:18:23.836 GoRPCClient: error on JSON-RPC call 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:23.836 2024/10/01 15:31:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:18:23.836 request: 00:18:23.836 { 00:18:23.836 "method": "bdev_nvme_attach_controller", 00:18:23.836 "params": { 00:18:23.836 "name": "NVMe0", 00:18:23.836 "trtype": "tcp", 00:18:23.836 "traddr": 
"10.0.0.3", 00:18:23.836 "adrfam": "ipv4", 00:18:23.836 "trsvcid": "4420", 00:18:23.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.836 "hostaddr": "10.0.0.1", 00:18:23.836 "prchk_reftag": false, 00:18:23.836 "prchk_guard": false, 00:18:23.836 "hdgst": false, 00:18:23.836 "ddgst": false, 00:18:23.836 "multipath": "failover", 00:18:23.836 "allow_unrecognized_csi": false 00:18:23.836 } 00:18:23.836 } 00:18:23.836 Got JSON-RPC error response 00:18:23.836 GoRPCClient: error on JSON-RPC call 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.836 15:31:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:24.093 NVMe0n1 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:24.093 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.093 15:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:18:24.093 15:31:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.468 { 00:18:25.468 "results": [ 00:18:25.468 { 00:18:25.468 "job": "NVMe0n1", 00:18:25.468 "core_mask": "0x1", 00:18:25.468 "workload": "write", 00:18:25.468 "status": "finished", 00:18:25.468 "queue_depth": 128, 00:18:25.468 "io_size": 4096, 00:18:25.468 "runtime": 1.007284, 00:18:25.468 "iops": 18417.84442123572, 00:18:25.468 "mibps": 71.94470477045203, 00:18:25.468 "io_failed": 0, 00:18:25.468 "io_timeout": 0, 00:18:25.468 "avg_latency_us": 6938.977095142891, 00:18:25.468 "min_latency_us": 3142.7490909090907, 00:18:25.468 "max_latency_us": 15371.17090909091 00:18:25.468 } 00:18:25.468 ], 00:18:25.468 "core_count": 1 00:18:25.468 } 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:25.468 nvme1n1 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
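The three rejections above are the point of this part of the test: bdev_nvme_attach_controller refuses to reuse the controller name NVMe0 for a different subsystem NQN, refuses a second path to the same controller while multipath is disabled (-x disable), and refuses a path whose transport ID duplicates an existing one even in failover mode (-x failover); each attempt comes back as Code=-114. Only a genuinely new path, the second listener on port 4421, is accepted. A minimal sketch of the same calls issued straight at bdevperf's RPC socket with SPDK's rpc.py (flags and values are copied from the rpc_cmd invocations in this log; the scripts/rpc.py path is assumed from the usual repo layout):

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
# Same controller name, different subsystem NQN -> Code=-114, rejected
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
# Same transport ID with multipath disabled -> rejected
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
# Duplicate transport ID in failover mode -> still rejected
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
# New path via the second listener on port 4421 -> accepted (the harness then
# detaches it again and attaches a separate controller, NVMe1, to that port)
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1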
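Right above, nvme1 has just been re-attached to cnode2 with -i 10.0.0.2 after an earlier round with -i 10.0.0.1; the nvmf_subsystem_get_qpairs checks that follow confirm the target really sees the requested initiator address on the connection. The verification, using the same RPC and jq filter as the harness (issued against the target's default RPC socket rather than bdevperf's):

# Which peer address did the cnode2 queue pairs connect from?
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 | jq -r '.[].peer_address.traddr'
# Expected: 10.0.0.1 after attaching with -i 10.0.0.1, and 10.0.0.2 after the
# re-attach with -i 10.0.0.2, since each initiator veth carries one address.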
00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:25.468 nvme1n1 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86099 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 86099 ']' 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 86099 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86099 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:25.468 killing process with pid 86099 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86099' 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 86099 00:18:25.468 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 86099 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:18:25.726 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:18:25.726 [2024-10-01 15:31:22.471694] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:18:25.726 [2024-10-01 15:31:22.471800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86099 ] 00:18:25.726 [2024-10-01 15:31:22.604118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.726 [2024-10-01 15:31:22.675300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.726 [2024-10-01 15:31:23.091268] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 6b0bc586-9fd1-4e7d-82ec-cbacbf53360b already exists 00:18:25.726 [2024-10-01 15:31:23.091379] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:6b0bc586-9fd1-4e7d-82ec-cbacbf53360b alias for bdev NVMe1n1 00:18:25.726 [2024-10-01 15:31:23.091414] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:18:25.726 Running I/O for 1 seconds... 
00:18:25.726 18393.00 IOPS, 71.85 MiB/s 00:18:25.726 Latency(us) 00:18:25.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.726 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:18:25.726 NVMe0n1 : 1.01 18417.84 71.94 0.00 0.00 6938.98 3142.75 15371.17 00:18:25.726 =================================================================================================================== 00:18:25.726 Total : 18417.84 71.94 0.00 0.00 6938.98 3142.75 15371.17 00:18:25.726 Received shutdown signal, test time was about 1.000000 seconds 00:18:25.726 00:18:25.726 Latency(us) 00:18:25.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.726 =================================================================================================================== 00:18:25.726 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.726 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:25.726 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:25.727 rmmod nvme_tcp 00:18:25.727 rmmod nvme_fabrics 00:18:25.727 rmmod nvme_keyring 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 86060 ']' 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 86060 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 86060 ']' 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 86060 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.727 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86060 00:18:25.985 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:25.985 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:25.985 killing process with pid 86060 00:18:25.985 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 86060' 00:18:25.985 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 86060 00:18:25.985 15:31:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 86060 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:25.985 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:18:26.243 00:18:26.243 real 0m4.008s 00:18:26.243 user 0m11.470s 
00:18:26.243 sys 0m1.022s 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.243 ************************************ 00:18:26.243 END TEST nvmf_multicontroller 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:26.243 ************************************ 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.243 ************************************ 00:18:26.243 START TEST nvmf_aer 00:18:26.243 ************************************ 00:18:26.243 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:18:26.501 * Looking for test storage... 00:18:26.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:26.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.502 --rc genhtml_branch_coverage=1 00:18:26.502 --rc genhtml_function_coverage=1 00:18:26.502 --rc genhtml_legend=1 00:18:26.502 --rc geninfo_all_blocks=1 00:18:26.502 --rc geninfo_unexecuted_blocks=1 00:18:26.502 00:18:26.502 ' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:26.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.502 --rc genhtml_branch_coverage=1 00:18:26.502 --rc genhtml_function_coverage=1 00:18:26.502 --rc genhtml_legend=1 00:18:26.502 --rc geninfo_all_blocks=1 00:18:26.502 --rc geninfo_unexecuted_blocks=1 00:18:26.502 00:18:26.502 ' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:26.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.502 --rc genhtml_branch_coverage=1 00:18:26.502 --rc genhtml_function_coverage=1 00:18:26.502 --rc genhtml_legend=1 00:18:26.502 --rc geninfo_all_blocks=1 00:18:26.502 --rc geninfo_unexecuted_blocks=1 00:18:26.502 00:18:26.502 ' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:26.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.502 --rc genhtml_branch_coverage=1 00:18:26.502 --rc genhtml_function_coverage=1 00:18:26.502 --rc genhtml_legend=1 00:18:26.502 --rc geninfo_all_blocks=1 00:18:26.502 --rc geninfo_unexecuted_blocks=1 00:18:26.502 00:18:26.502 ' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.502 
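A few entries above, scripts/common.sh decides whether the installed lcov predates version 2: cmp_versions splits both version strings on '.', '-' and ':' (IFS=.-:) and compares them field by field as integers, so 'lt 1.15 2' succeeds because 1 < 2 in the leading field. A compact sketch of that logic (a paraphrase for readability, not the repository's exact function):

# Succeeds when version $1 sorts strictly below version $2.
lt() {
    local IFS=.-:                 # split fields the way scripts/common.sh does
    local -a ver1=($1) ver2=($2)
    local i
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # lower field: older
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1   # higher field: newer
    done
    return 1                      # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov is older than 2'   # matches the outcome in this log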
15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.502 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ no == yes ]] 
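One detail worth flagging from the block above: test/nvmf/common.sh line 33 printed "[: : integer expression expected" because it evaluated '[' '' -eq 1 ']', feeding an empty (unset) variable to a numeric test; bash prints the complaint and the harness simply falls through to the '[' -n '' ']' branch and carries on. The usual defensive form (a sketch only; 'flag' is a stand-in name, not the variable common.sh actually tests) defaults the value before comparing:

flag="${flag:-0}"            # substitute 0 when the variable is unset or empty
if [ "$flag" -eq 1 ]; then   # the numeric test now always sees an integer
    echo 'feature enabled'
fi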
00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:26.502 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:26.503 Cannot find device "nvmf_init_br" 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:26.503 Cannot find device "nvmf_init_br2" 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:26.503 Cannot find device "nvmf_tgt_br" 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:26.503 Cannot find device "nvmf_tgt_br2" 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:26.503 Cannot find device "nvmf_init_br" 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:26.503 Cannot find device "nvmf_init_br2" 00:18:26.503 15:31:25 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:26.503 Cannot find device "nvmf_tgt_br" 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:26.503 Cannot find device "nvmf_tgt_br2" 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:26.503 Cannot find device "nvmf_br" 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:18:26.503 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:26.761 Cannot find device "nvmf_init_if" 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:26.761 Cannot find device "nvmf_init_if2" 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:26.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:26.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:26.761 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:26.762 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:26.762 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:26.762 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:26.762 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:26.762 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:26.762 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:26.762 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:26.762 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:27.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:27.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:18:27.020 00:18:27.020 --- 10.0.0.3 ping statistics --- 00:18:27.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.020 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:27.020 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:27.020 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:18:27.020 00:18:27.020 --- 10.0.0.4 ping statistics --- 00:18:27.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.020 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:27.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:27.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:27.020 00:18:27.020 --- 10.0.0.1 ping statistics --- 00:18:27.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.020 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:27.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:18:27.020 00:18:27.020 --- 10.0.0.2 ping statistics --- 00:18:27.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.020 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # return 0 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=86368 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 86368 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 86368 ']' 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.020 15:31:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:27.020 [2024-10-01 15:31:26.052123] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
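The four pings above prove out the veth/bridge plumbing that nvmf_veth_init just built: 10.0.0.1 and 10.0.0.2 live on the initiator side, 10.0.0.3 and 10.0.0.4 on the target side inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. nvmfappstart then launches the target inside that namespace, which is why its TCP listeners can bind the namespaced addresses. Reduced to its essentials (command line and socket path taken from this log):

# Start the NVMe-oF target in the prepared network namespace
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten (from the harness) then polls the JSON-RPC socket
# /var/tmp/spdk.sock until the target answers, before any rpc_cmd is issued.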
00:18:27.020 [2024-10-01 15:31:26.052220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.278 [2024-10-01 15:31:26.192671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.278 [2024-10-01 15:31:26.271411] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.278 [2024-10-01 15:31:26.271493] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.278 [2024-10-01 15:31:26.271509] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.278 [2024-10-01 15:31:26.271519] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.278 [2024-10-01 15:31:26.271527] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.278 [2024-10-01 15:31:26.272212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.278 [2024-10-01 15:31:26.272303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.278 [2024-10-01 15:31:26.272397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.278 [2024-10-01 15:31:26.272402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.213 [2024-10-01 15:31:27.206622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.213 Malloc0 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.213 [2024-10-01 15:31:27.254920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.213 [ 00:18:28.213 { 00:18:28.213 "allow_any_host": true, 00:18:28.213 "hosts": [], 00:18:28.213 "listen_addresses": [], 00:18:28.213 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:28.213 "subtype": "Discovery" 00:18:28.213 }, 00:18:28.213 { 00:18:28.213 "allow_any_host": true, 00:18:28.213 "hosts": [], 00:18:28.213 "listen_addresses": [ 00:18:28.213 { 00:18:28.213 "adrfam": "IPv4", 00:18:28.213 "traddr": "10.0.0.3", 00:18:28.213 "trsvcid": "4420", 00:18:28.213 "trtype": "TCP" 00:18:28.213 } 00:18:28.213 ], 00:18:28.213 "max_cntlid": 65519, 00:18:28.213 "max_namespaces": 2, 00:18:28.213 "min_cntlid": 1, 00:18:28.213 "model_number": "SPDK bdev Controller", 00:18:28.213 "namespaces": [ 00:18:28.213 { 00:18:28.213 "bdev_name": "Malloc0", 00:18:28.213 "name": "Malloc0", 00:18:28.213 "nguid": "56F93459A6054EE4B9DF3912B5539FFC", 00:18:28.213 "nsid": 1, 00:18:28.213 "uuid": "56f93459-a605-4ee4-b9df-3912b5539ffc" 00:18:28.213 } 00:18:28.213 ], 00:18:28.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.213 "serial_number": "SPDK00000000000001", 00:18:28.213 "subtype": "NVMe" 00:18:28.213 } 00:18:28.213 ] 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=86422 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:18:28.213 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.471 Malloc1 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.471 [ 00:18:28.471 { 00:18:28.471 "allow_any_host": true, 00:18:28.471 "hosts": [], 00:18:28.471 "listen_addresses": [], 00:18:28.471 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:28.471 "subtype": "Discovery" 00:18:28.471 }, 00:18:28.471 { 00:18:28.471 "allow_any_host": true, 00:18:28.471 "hosts": [], 00:18:28.471 "listen_addresses": [ 00:18:28.471 { 00:18:28.471 "adrfam": "IPv4", 00:18:28.471 "traddr": "10.0.0.3", 00:18:28.471 "trsvcid": "4420", 00:18:28.471 "trtype": "TCP" 00:18:28.471 } 00:18:28.471 ], 00:18:28.471 "max_cntlid": 65519, 00:18:28.471 "max_namespaces": 2, 00:18:28.471 "min_cntlid": 1, 00:18:28.471 "model_number": "SPDK bdev Controller", 00:18:28.471 "namespaces": [ 00:18:28.471 { 00:18:28.471 "bdev_name": "Malloc0", 00:18:28.471 "name": "Malloc0", 00:18:28.471 "nguid": "56F93459A6054EE4B9DF3912B5539FFC", 00:18:28.471 "nsid": 1, 00:18:28.471 "uuid": "56f93459-a605-4ee4-b9df-3912b5539ffc" 00:18:28.471 }, 00:18:28.471 { 00:18:28.471 "bdev_name": "Malloc1", 00:18:28.471 "name": "Malloc1", 00:18:28.471 "nguid": "283095C65C204BE9B77B409671DA5EDC", 00:18:28.471 "nsid": 2, 00:18:28.471 "uuid": "283095c6-5c20-4be9-b77b-409671da5edc" 00:18:28.471 } 00:18:28.471 ], 00:18:28.471 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.471 
"serial_number": "SPDK00000000000001", 00:18:28.471 "subtype": "NVMe" 00:18:28.471 } 00:18:28.471 ] 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 86422 00:18:28.471 Asynchronous Event Request test 00:18:28.471 Attaching to 10.0.0.3 00:18:28.471 Attached to 10.0.0.3 00:18:28.471 Registering asynchronous event callbacks... 00:18:28.471 Starting namespace attribute notice tests for all controllers... 00:18:28.471 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:28.471 aer_cb - Changed Namespace 00:18:28.471 Cleaning up... 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:28.471 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.472 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:18:28.472 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:18:28.472 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:28.472 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.729 rmmod nvme_tcp 00:18:28.729 rmmod nvme_fabrics 00:18:28.729 rmmod nvme_keyring 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 86368 ']' 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 86368 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 86368 ']' 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 86368 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:18:28.729 15:31:27 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86368 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:28.729 killing process with pid 86368 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86368' 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 86368 00:18:28.729 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 86368 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:28.987 15:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.987 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:18:29.245 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:18:29.245 00:18:29.245 real 0m2.822s 00:18:29.245 user 0m7.165s 00:18:29.245 sys 0m0.721s 00:18:29.245 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.245 15:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:29.245 ************************************ 00:18:29.245 END TEST nvmf_aer 00:18:29.245 ************************************ 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.246 ************************************ 00:18:29.246 START TEST nvmf_async_init 00:18:29.246 ************************************ 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:18:29.246 * Looking for test storage... 00:18:29.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:29.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.246 --rc genhtml_branch_coverage=1 00:18:29.246 --rc genhtml_function_coverage=1 00:18:29.246 --rc genhtml_legend=1 00:18:29.246 --rc geninfo_all_blocks=1 00:18:29.246 --rc geninfo_unexecuted_blocks=1 00:18:29.246 00:18:29.246 ' 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:29.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.246 --rc genhtml_branch_coverage=1 00:18:29.246 --rc genhtml_function_coverage=1 00:18:29.246 --rc genhtml_legend=1 00:18:29.246 --rc geninfo_all_blocks=1 00:18:29.246 --rc geninfo_unexecuted_blocks=1 00:18:29.246 00:18:29.246 ' 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:29.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.246 --rc genhtml_branch_coverage=1 00:18:29.246 --rc genhtml_function_coverage=1 00:18:29.246 --rc genhtml_legend=1 00:18:29.246 --rc geninfo_all_blocks=1 00:18:29.246 --rc geninfo_unexecuted_blocks=1 00:18:29.246 00:18:29.246 ' 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:29.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.246 --rc genhtml_branch_coverage=1 00:18:29.246 --rc genhtml_function_coverage=1 00:18:29.246 --rc genhtml_legend=1 00:18:29.246 --rc geninfo_all_blocks=1 00:18:29.246 --rc geninfo_unexecuted_blocks=1 00:18:29.246 00:18:29.246 ' 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.246 15:31:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.246 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:29.247 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:18:29.247 15:31:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:18:29.247 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:18:29.512 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=811a0554adfa425d899595f8e30d4ad4 00:18:29.512 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:18:29.512 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:29.512 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
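The NVMF_* interface variables being set here (the list continues just below) parameterize nvmf_veth_init, whose teardown and rebuild follow. A condensed sketch of one initiator/target pair, assembled from the ip and iptables commands logged below; the second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) repeats the same steps, and the per-interface 'up' toggles are trimmed for brevity:

    # Condensed from the nvmf_veth_init commands that follow in this log.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge joins the host-side peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

The ping sweep over 10.0.0.1 through 10.0.0.4 that follows is a sanity check that both pairs forward across the bridge before the target is started.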
00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:29.513 Cannot find device "nvmf_init_br" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:29.513 Cannot find device "nvmf_init_br2" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:29.513 Cannot find device "nvmf_tgt_br" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:29.513 Cannot find device "nvmf_tgt_br2" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:29.513 Cannot find device "nvmf_init_br" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:29.513 Cannot find device "nvmf_init_br2" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:29.513 Cannot find device "nvmf_tgt_br" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:29.513 Cannot find device "nvmf_tgt_br2" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:29.513 Cannot find device "nvmf_br" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:29.513 Cannot find device "nvmf_init_if" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:29.513 Cannot find device "nvmf_init_if2" 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:29.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:18:29.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:29.513 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:29.771 15:31:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:29.771 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:29.771 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:18:29.771 00:18:29.771 --- 10.0.0.3 ping statistics --- 00:18:29.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.771 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:29.771 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:29.771 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:18:29.771 00:18:29.771 --- 10.0.0.4 ping statistics --- 00:18:29.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.771 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:29.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:29.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:18:29.771 00:18:29.771 --- 10.0.0.1 ping statistics --- 00:18:29.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.771 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:29.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:29.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms
00:18:29.771
00:18:29.771 --- 10.0.0.2 ping statistics ---
00:18:29.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:29.771 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # return 0
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=86648
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 86648
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 86648 ']'
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:29.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:29.771 15:31:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:18:29.771 [2024-10-01 15:31:28.902479] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization...
[2024-10-01 15:31:28.902575] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:30.028 [2024-10-01 15:31:29.063743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-01 15:31:29.149586] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:30.028 [2024-10-01 15:31:29.149653] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.028 [2024-10-01 15:31:29.149668] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.028 [2024-10-01 15:31:29.149680] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.028 [2024-10-01 15:31:29.149691] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.028 [2024-10-01 15:31:29.149728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.287 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.287 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:18:30.287 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:30.287 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:30.287 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.288 [2024-10-01 15:31:29.288208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.288 null0 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 811a0554adfa425d899595f8e30d4ad4 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.288 [2024-10-01 15:31:29.328339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.288 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.546 nvme0n1 00:18:30.546 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.546 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:30.546 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.546 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.546 [ 00:18:30.546 { 00:18:30.546 "aliases": [ 00:18:30.546 "811a0554-adfa-425d-8995-95f8e30d4ad4" 00:18:30.546 ], 00:18:30.546 "assigned_rate_limits": { 00:18:30.546 "r_mbytes_per_sec": 0, 00:18:30.546 "rw_ios_per_sec": 0, 00:18:30.546 "rw_mbytes_per_sec": 0, 00:18:30.546 "w_mbytes_per_sec": 0 00:18:30.546 }, 00:18:30.546 "block_size": 512, 00:18:30.546 "claimed": false, 00:18:30.546 "driver_specific": { 00:18:30.546 "mp_policy": "active_passive", 00:18:30.546 "nvme": [ 00:18:30.546 { 00:18:30.546 "ctrlr_data": { 00:18:30.546 "ana_reporting": false, 00:18:30.546 "cntlid": 1, 00:18:30.546 "firmware_revision": "25.01", 00:18:30.546 "model_number": "SPDK bdev Controller", 00:18:30.546 "multi_ctrlr": true, 00:18:30.547 "oacs": { 00:18:30.547 "firmware": 0, 00:18:30.547 "format": 0, 00:18:30.547 "ns_manage": 0, 00:18:30.547 "security": 0 00:18:30.547 }, 00:18:30.547 "serial_number": "00000000000000000000", 00:18:30.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.547 "vendor_id": "0x8086" 00:18:30.547 }, 00:18:30.547 "ns_data": { 00:18:30.547 "can_share": true, 00:18:30.547 "id": 1 00:18:30.547 }, 00:18:30.547 "trid": { 00:18:30.547 "adrfam": "IPv4", 00:18:30.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.547 "traddr": "10.0.0.3", 00:18:30.547 "trsvcid": "4420", 00:18:30.547 "trtype": "TCP" 00:18:30.547 }, 00:18:30.547 "vs": { 00:18:30.547 "nvme_version": "1.3" 00:18:30.547 } 00:18:30.547 } 00:18:30.547 ] 00:18:30.547 }, 00:18:30.547 "memory_domains": [ 00:18:30.547 { 00:18:30.547 "dma_device_id": "system", 00:18:30.547 "dma_device_type": 1 00:18:30.547 } 00:18:30.547 ], 00:18:30.547 "name": "nvme0n1", 00:18:30.547 "num_blocks": 2097152, 00:18:30.547 "numa_id": -1, 00:18:30.547 "product_name": "NVMe disk", 00:18:30.547 "supported_io_types": { 00:18:30.547 "abort": true, 
00:18:30.547 "compare": true, 00:18:30.547 "compare_and_write": true, 00:18:30.547 "copy": true, 00:18:30.547 "flush": true, 00:18:30.547 "get_zone_info": false, 00:18:30.547 "nvme_admin": true, 00:18:30.547 "nvme_io": true, 00:18:30.547 "nvme_io_md": false, 00:18:30.547 "nvme_iov_md": false, 00:18:30.547 "read": true, 00:18:30.547 "reset": true, 00:18:30.547 "seek_data": false, 00:18:30.547 "seek_hole": false, 00:18:30.547 "unmap": false, 00:18:30.547 "write": true, 00:18:30.547 "write_zeroes": true, 00:18:30.547 "zcopy": false, 00:18:30.547 "zone_append": false, 00:18:30.547 "zone_management": false 00:18:30.547 }, 00:18:30.547 "uuid": "811a0554-adfa-425d-8995-95f8e30d4ad4", 00:18:30.547 "zoned": false 00:18:30.547 } 00:18:30.547 ] 00:18:30.547 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.547 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:18:30.547 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.547 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.547 [2024-10-01 15:31:29.589988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:30.547 [2024-10-01 15:31:29.590160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185fc00 (9): Bad file descriptor 00:18:30.806 [2024-10-01 15:31:29.732757] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:30.806 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.806 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:30.806 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.806 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.806 [ 00:18:30.806 { 00:18:30.806 "aliases": [ 00:18:30.806 "811a0554-adfa-425d-8995-95f8e30d4ad4" 00:18:30.806 ], 00:18:30.806 "assigned_rate_limits": { 00:18:30.806 "r_mbytes_per_sec": 0, 00:18:30.806 "rw_ios_per_sec": 0, 00:18:30.806 "rw_mbytes_per_sec": 0, 00:18:30.806 "w_mbytes_per_sec": 0 00:18:30.806 }, 00:18:30.806 "block_size": 512, 00:18:30.806 "claimed": false, 00:18:30.806 "driver_specific": { 00:18:30.806 "mp_policy": "active_passive", 00:18:30.806 "nvme": [ 00:18:30.806 { 00:18:30.806 "ctrlr_data": { 00:18:30.806 "ana_reporting": false, 00:18:30.806 "cntlid": 2, 00:18:30.806 "firmware_revision": "25.01", 00:18:30.806 "model_number": "SPDK bdev Controller", 00:18:30.806 "multi_ctrlr": true, 00:18:30.806 "oacs": { 00:18:30.806 "firmware": 0, 00:18:30.806 "format": 0, 00:18:30.806 "ns_manage": 0, 00:18:30.806 "security": 0 00:18:30.806 }, 00:18:30.806 "serial_number": "00000000000000000000", 00:18:30.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.806 "vendor_id": "0x8086" 00:18:30.806 }, 00:18:30.806 "ns_data": { 00:18:30.806 "can_share": true, 00:18:30.806 "id": 1 00:18:30.806 }, 00:18:30.806 "trid": { 00:18:30.807 "adrfam": "IPv4", 00:18:30.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.807 "traddr": "10.0.0.3", 00:18:30.807 "trsvcid": "4420", 00:18:30.807 "trtype": "TCP" 00:18:30.807 }, 00:18:30.807 "vs": { 00:18:30.807 "nvme_version": "1.3" 00:18:30.807 } 00:18:30.807 } 00:18:30.807 ] 00:18:30.807 }, 00:18:30.807 
"memory_domains": [ 00:18:30.807 { 00:18:30.807 "dma_device_id": "system", 00:18:30.807 "dma_device_type": 1 00:18:30.807 } 00:18:30.807 ], 00:18:30.807 "name": "nvme0n1", 00:18:30.807 "num_blocks": 2097152, 00:18:30.807 "numa_id": -1, 00:18:30.807 "product_name": "NVMe disk", 00:18:30.807 "supported_io_types": { 00:18:30.807 "abort": true, 00:18:30.807 "compare": true, 00:18:30.807 "compare_and_write": true, 00:18:30.807 "copy": true, 00:18:30.807 "flush": true, 00:18:30.807 "get_zone_info": false, 00:18:30.807 "nvme_admin": true, 00:18:30.807 "nvme_io": true, 00:18:30.807 "nvme_io_md": false, 00:18:30.807 "nvme_iov_md": false, 00:18:30.807 "read": true, 00:18:30.807 "reset": true, 00:18:30.807 "seek_data": false, 00:18:30.807 "seek_hole": false, 00:18:30.807 "unmap": false, 00:18:30.807 "write": true, 00:18:30.807 "write_zeroes": true, 00:18:30.807 "zcopy": false, 00:18:30.807 "zone_append": false, 00:18:30.807 "zone_management": false 00:18:30.807 }, 00:18:30.807 "uuid": "811a0554-adfa-425d-8995-95f8e30d4ad4", 00:18:30.807 "zoned": false 00:18:30.807 } 00:18:30.807 ] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.1EQApNx2Qy 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.1EQApNx2Qy 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.1EQApNx2Qy 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.807 [2024-10-01 15:31:29.802202] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:30.807 [2024-10-01 15:31:29.802399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.807 [2024-10-01 15:31:29.818219] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.807 nvme0n1 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.807 [ 00:18:30.807 { 00:18:30.807 "aliases": [ 00:18:30.807 "811a0554-adfa-425d-8995-95f8e30d4ad4" 00:18:30.807 ], 00:18:30.807 "assigned_rate_limits": { 00:18:30.807 "r_mbytes_per_sec": 0, 00:18:30.807 "rw_ios_per_sec": 0, 00:18:30.807 "rw_mbytes_per_sec": 0, 00:18:30.807 "w_mbytes_per_sec": 0 00:18:30.807 }, 00:18:30.807 "block_size": 512, 00:18:30.807 "claimed": false, 00:18:30.807 "driver_specific": { 00:18:30.807 "mp_policy": "active_passive", 00:18:30.807 "nvme": [ 00:18:30.807 { 00:18:30.807 "ctrlr_data": { 00:18:30.807 "ana_reporting": false, 00:18:30.807 "cntlid": 3, 00:18:30.807 "firmware_revision": "25.01", 00:18:30.807 "model_number": "SPDK bdev Controller", 00:18:30.807 "multi_ctrlr": true, 00:18:30.807 "oacs": { 00:18:30.807 "firmware": 0, 00:18:30.807 "format": 0, 00:18:30.807 "ns_manage": 0, 00:18:30.807 "security": 0 00:18:30.807 }, 00:18:30.807 "serial_number": "00000000000000000000", 00:18:30.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.807 "vendor_id": "0x8086" 00:18:30.807 }, 00:18:30.807 "ns_data": { 00:18:30.807 "can_share": true, 00:18:30.807 "id": 1 00:18:30.807 }, 00:18:30.807 "trid": { 00:18:30.807 "adrfam": "IPv4", 00:18:30.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.807 "traddr": "10.0.0.3", 00:18:30.807 "trsvcid": "4421", 00:18:30.807 "trtype": "TCP" 00:18:30.807 }, 00:18:30.807 "vs": { 00:18:30.807 "nvme_version": "1.3" 00:18:30.807 } 00:18:30.807 } 00:18:30.807 ] 00:18:30.807 }, 00:18:30.807 "memory_domains": [ 00:18:30.807 { 00:18:30.807 "dma_device_id": "system", 00:18:30.807 "dma_device_type": 1 00:18:30.807 } 00:18:30.807 ], 00:18:30.807 "name": "nvme0n1", 00:18:30.807 "num_blocks": 2097152, 00:18:30.807 "numa_id": 
-1, 00:18:30.807 "product_name": "NVMe disk", 00:18:30.807 "supported_io_types": { 00:18:30.807 "abort": true, 00:18:30.807 "compare": true, 00:18:30.807 "compare_and_write": true, 00:18:30.807 "copy": true, 00:18:30.807 "flush": true, 00:18:30.807 "get_zone_info": false, 00:18:30.807 "nvme_admin": true, 00:18:30.807 "nvme_io": true, 00:18:30.807 "nvme_io_md": false, 00:18:30.807 "nvme_iov_md": false, 00:18:30.807 "read": true, 00:18:30.807 "reset": true, 00:18:30.807 "seek_data": false, 00:18:30.807 "seek_hole": false, 00:18:30.807 "unmap": false, 00:18:30.807 "write": true, 00:18:30.807 "write_zeroes": true, 00:18:30.807 "zcopy": false, 00:18:30.807 "zone_append": false, 00:18:30.807 "zone_management": false 00:18:30.807 }, 00:18:30.807 "uuid": "811a0554-adfa-425d-8995-95f8e30d4ad4", 00:18:30.807 "zoned": false 00:18:30.807 } 00:18:30.807 ] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.1EQApNx2Qy 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:30.807 15:31:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:31.377 rmmod nvme_tcp 00:18:31.377 rmmod nvme_fabrics 00:18:31.377 rmmod nvme_keyring 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 86648 ']' 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 86648 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 86648 ']' 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 86648 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86648 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:31.377 killing process with pid 86648 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86648' 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 86648 00:18:31.377 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 86648 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:31.634 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.892 15:31:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:18:31.892 ************************************ 00:18:31.892 END TEST nvmf_async_init 00:18:31.892 00:18:31.892 real 0m2.681s 00:18:31.892 user 0m2.185s 00:18:31.892 sys 0m0.634s 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:31.892 ************************************ 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:31.892 15:31:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.892 ************************************ 00:18:31.892 START TEST dma 00:18:31.892 ************************************ 00:18:31.893 15:31:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:18:31.893 * Looking for test storage... 00:18:31.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:31.893 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:31.893 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:18:31.893 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:18:32.150 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:32.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.151 --rc genhtml_branch_coverage=1 00:18:32.151 --rc genhtml_function_coverage=1 00:18:32.151 --rc genhtml_legend=1 00:18:32.151 --rc geninfo_all_blocks=1 00:18:32.151 --rc geninfo_unexecuted_blocks=1 00:18:32.151 00:18:32.151 ' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:32.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.151 --rc genhtml_branch_coverage=1 00:18:32.151 --rc genhtml_function_coverage=1 00:18:32.151 --rc genhtml_legend=1 00:18:32.151 --rc geninfo_all_blocks=1 00:18:32.151 --rc geninfo_unexecuted_blocks=1 00:18:32.151 00:18:32.151 ' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:32.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.151 --rc genhtml_branch_coverage=1 00:18:32.151 --rc genhtml_function_coverage=1 00:18:32.151 --rc genhtml_legend=1 00:18:32.151 --rc geninfo_all_blocks=1 00:18:32.151 --rc geninfo_unexecuted_blocks=1 00:18:32.151 00:18:32.151 ' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:32.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.151 --rc genhtml_branch_coverage=1 00:18:32.151 --rc genhtml_function_coverage=1 00:18:32.151 --rc genhtml_legend=1 00:18:32.151 --rc geninfo_all_blocks=1 00:18:32.151 --rc geninfo_unexecuted_blocks=1 00:18:32.151 00:18:32.151 ' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.151 15:31:31 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:32.151 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:18:32.151 00:18:32.151 real 0m0.202s 00:18:32.151 user 0m0.129s 00:18:32.151 sys 0m0.083s 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:32.151 ************************************ 00:18:32.151 END TEST dma 00:18:32.151 ************************************ 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.151 ************************************ 00:18:32.151 START TEST nvmf_identify 00:18:32.151 ************************************ 00:18:32.151 15:31:31 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:32.151 * Looking for test storage... 00:18:32.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:18:32.151 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:32.409 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:32.409 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:32.409 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:32.409 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:32.409 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:18:32.409 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.410 --rc genhtml_branch_coverage=1 00:18:32.410 --rc genhtml_function_coverage=1 00:18:32.410 --rc genhtml_legend=1 00:18:32.410 --rc geninfo_all_blocks=1 00:18:32.410 --rc geninfo_unexecuted_blocks=1 00:18:32.410 00:18:32.410 ' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.410 --rc genhtml_branch_coverage=1 00:18:32.410 --rc genhtml_function_coverage=1 00:18:32.410 --rc genhtml_legend=1 00:18:32.410 --rc geninfo_all_blocks=1 00:18:32.410 --rc geninfo_unexecuted_blocks=1 00:18:32.410 00:18:32.410 ' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.410 --rc genhtml_branch_coverage=1 00:18:32.410 --rc genhtml_function_coverage=1 00:18:32.410 --rc genhtml_legend=1 00:18:32.410 --rc geninfo_all_blocks=1 00:18:32.410 --rc geninfo_unexecuted_blocks=1 00:18:32.410 00:18:32.410 ' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.410 --rc genhtml_branch_coverage=1 00:18:32.410 --rc genhtml_function_coverage=1 00:18:32.410 --rc genhtml_legend=1 00:18:32.410 --rc geninfo_all_blocks=1 00:18:32.410 --rc geninfo_unexecuted_blocks=1 00:18:32.410 00:18:32.410 ' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.410 
15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:32.410 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.410 15:31:31 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:32.410 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:32.411 Cannot find device "nvmf_init_br" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:32.411 Cannot find device "nvmf_init_br2" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:32.411 Cannot find device "nvmf_tgt_br" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:32.411 Cannot find device "nvmf_tgt_br2" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:32.411 Cannot find device "nvmf_init_br" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:32.411 Cannot find device "nvmf_init_br2" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:32.411 Cannot find device "nvmf_tgt_br" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:32.411 Cannot find device "nvmf_tgt_br2" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:32.411 Cannot find device "nvmf_br" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:32.411 Cannot find device "nvmf_init_if" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:32.411 Cannot find device "nvmf_init_if2" 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:32.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:32.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:32.411 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:32.669 
15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:32.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:32.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:18:32.669 00:18:32.669 --- 10.0.0.3 ping statistics --- 00:18:32.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.669 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:32.669 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:32.669 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:18:32.669 00:18:32.669 --- 10.0.0.4 ping statistics --- 00:18:32.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.669 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:32.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:32.669 00:18:32.669 --- 10.0.0.1 ping statistics --- 00:18:32.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.669 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:32.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:18:32.669 00:18:32.669 --- 10.0.0.2 ping statistics --- 00:18:32.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.669 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86969 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86969 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 86969 ']' 00:18:32.669 
15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.669 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.670 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.670 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.670 15:31:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:32.927 [2024-10-01 15:31:31.853323] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:18:32.927 [2024-10-01 15:31:31.853471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.927 [2024-10-01 15:31:32.002893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.927 [2024-10-01 15:31:32.092844] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.927 [2024-10-01 15:31:32.092952] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.927 [2024-10-01 15:31:32.092971] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.927 [2024-10-01 15:31:32.092983] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.927 [2024-10-01 15:31:32.092994] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:32.927 [2024-10-01 15:31:32.093874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.927 [2024-10-01 15:31:32.094280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.184 [2024-10-01 15:31:32.094397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.184 [2024-10-01 15:31:32.094531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:34.116 [2024-10-01 15:31:32.933976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:34.116 Malloc0 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:34.116 15:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:34.116 [2024-10-01 15:31:33.013073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:34.116 [ 00:18:34.116 { 00:18:34.116 "allow_any_host": true, 00:18:34.116 "hosts": [], 00:18:34.116 "listen_addresses": [ 00:18:34.116 { 00:18:34.116 "adrfam": "IPv4", 00:18:34.116 "traddr": "10.0.0.3", 00:18:34.116 "trsvcid": "4420", 00:18:34.116 "trtype": "TCP" 00:18:34.116 } 00:18:34.116 ], 00:18:34.116 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:34.116 "subtype": "Discovery" 00:18:34.116 }, 00:18:34.116 { 00:18:34.116 "allow_any_host": true, 00:18:34.116 "hosts": [], 00:18:34.116 "listen_addresses": [ 00:18:34.116 { 00:18:34.116 "adrfam": "IPv4", 00:18:34.116 "traddr": "10.0.0.3", 00:18:34.116 "trsvcid": "4420", 00:18:34.116 "trtype": "TCP" 00:18:34.116 } 00:18:34.116 ], 00:18:34.116 "max_cntlid": 65519, 00:18:34.116 "max_namespaces": 32, 00:18:34.116 "min_cntlid": 1, 00:18:34.116 "model_number": "SPDK bdev Controller", 00:18:34.116 "namespaces": [ 00:18:34.116 { 00:18:34.116 "bdev_name": "Malloc0", 00:18:34.116 "eui64": "ABCDEF0123456789", 00:18:34.116 "name": "Malloc0", 00:18:34.116 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:34.116 "nsid": 1, 00:18:34.116 "uuid": "15f4b16a-f9e9-44df-bc6a-d6399bc2f18d" 00:18:34.116 } 00:18:34.116 ], 00:18:34.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.116 "serial_number": "SPDK00000000000001", 00:18:34.116 "subtype": "NVMe" 00:18:34.116 } 00:18:34.116 ] 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.116 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:34.116 [2024-10-01 15:31:33.062002] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
00:18:34.116 [2024-10-01 15:31:33.062051] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87022 ] 00:18:34.117 [2024-10-01 15:31:33.197855] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:18:34.117 [2024-10-01 15:31:33.197921] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:34.117 [2024-10-01 15:31:33.197930] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:34.117 [2024-10-01 15:31:33.197946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:34.117 [2024-10-01 15:31:33.197956] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:34.117 [2024-10-01 15:31:33.198289] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:18:34.117 [2024-10-01 15:31:33.198364] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9dc8f0 0 00:18:34.117 [2024-10-01 15:31:33.210447] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:34.117 [2024-10-01 15:31:33.210477] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:34.117 [2024-10-01 15:31:33.210484] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:34.117 [2024-10-01 15:31:33.210488] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:34.117 [2024-10-01 15:31:33.210526] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.210534] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.210539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9dc8f0) 00:18:34.117 [2024-10-01 15:31:33.210572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:34.117 [2024-10-01 15:31:33.210616] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03000, cid 0, qid 0 00:18:34.117 [2024-10-01 15:31:33.218445] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.117 [2024-10-01 15:31:33.218471] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.117 [2024-10-01 15:31:33.218477] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.218483] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03000) on tqpair=0x9dc8f0 00:18:34.117 [2024-10-01 15:31:33.218499] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:34.117 [2024-10-01 15:31:33.218509] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:18:34.117 [2024-10-01 15:31:33.218516] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:18:34.117 [2024-10-01 15:31:33.218534] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.218540] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.117 
[2024-10-01 15:31:33.218544] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9dc8f0) 00:18:34.117 [2024-10-01 15:31:33.218555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.117 [2024-10-01 15:31:33.218588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03000, cid 0, qid 0 00:18:34.117 [2024-10-01 15:31:33.218675] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.117 [2024-10-01 15:31:33.218689] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.117 [2024-10-01 15:31:33.218696] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.218703] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03000) on tqpair=0x9dc8f0 00:18:34.117 [2024-10-01 15:31:33.218713] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:18:34.117 [2024-10-01 15:31:33.218726] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:18:34.117 [2024-10-01 15:31:33.218740] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.218746] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.218750] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9dc8f0) 00:18:34.117 [2024-10-01 15:31:33.218759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.117 [2024-10-01 15:31:33.218786] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03000, cid 0, qid 0 00:18:34.117 [2024-10-01 15:31:33.218848] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.117 [2024-10-01 15:31:33.218860] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.117 [2024-10-01 15:31:33.218864] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.218869] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03000) on tqpair=0x9dc8f0 00:18:34.117 [2024-10-01 15:31:33.218875] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:18:34.117 [2024-10-01 15:31:33.218885] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:18:34.117 [2024-10-01 15:31:33.218894] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.218899] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.218903] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9dc8f0) 00:18:34.117 [2024-10-01 15:31:33.218912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.117 [2024-10-01 15:31:33.218942] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03000, cid 0, qid 0 00:18:34.117 [2024-10-01 15:31:33.218996] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.117 [2024-10-01 15:31:33.219005] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:18:34.117 [2024-10-01 15:31:33.219009] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219014] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03000) on tqpair=0x9dc8f0 00:18:34.117 [2024-10-01 15:31:33.219022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:34.117 [2024-10-01 15:31:33.219040] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219049] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219057] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9dc8f0) 00:18:34.117 [2024-10-01 15:31:33.219068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.117 [2024-10-01 15:31:33.219092] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03000, cid 0, qid 0 00:18:34.117 [2024-10-01 15:31:33.219151] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.117 [2024-10-01 15:31:33.219161] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.117 [2024-10-01 15:31:33.219168] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219175] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03000) on tqpair=0x9dc8f0 00:18:34.117 [2024-10-01 15:31:33.219184] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:18:34.117 [2024-10-01 15:31:33.219193] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:18:34.117 [2024-10-01 15:31:33.219208] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:34.117 [2024-10-01 15:31:33.219319] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:18:34.117 [2024-10-01 15:31:33.219330] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:34.117 [2024-10-01 15:31:33.219340] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219345] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219349] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9dc8f0) 00:18:34.117 [2024-10-01 15:31:33.219359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.117 [2024-10-01 15:31:33.219394] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03000, cid 0, qid 0 00:18:34.117 [2024-10-01 15:31:33.219472] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.117 [2024-10-01 15:31:33.219488] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.117 [2024-10-01 15:31:33.219492] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219497] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03000) on tqpair=0x9dc8f0 00:18:34.117 [2024-10-01 15:31:33.219503] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:34.117 [2024-10-01 15:31:33.219516] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219521] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219525] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9dc8f0) 00:18:34.117 [2024-10-01 15:31:33.219534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.117 [2024-10-01 15:31:33.219559] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03000, cid 0, qid 0 00:18:34.117 [2024-10-01 15:31:33.219617] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.117 [2024-10-01 15:31:33.219630] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.117 [2024-10-01 15:31:33.219637] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219644] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03000) on tqpair=0x9dc8f0 00:18:34.117 [2024-10-01 15:31:33.219652] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:34.117 [2024-10-01 15:31:33.219660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:18:34.117 [2024-10-01 15:31:33.219670] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:18:34.117 [2024-10-01 15:31:33.219687] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:18:34.117 [2024-10-01 15:31:33.219700] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.117 [2024-10-01 15:31:33.219704] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9dc8f0) 00:18:34.117 [2024-10-01 15:31:33.219713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.117 [2024-10-01 15:31:33.219738] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03000, cid 0, qid 0 00:18:34.117 [2024-10-01 15:31:33.219836] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.117 [2024-10-01 15:31:33.219857] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.117 [2024-10-01 15:31:33.219862] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.219867] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9dc8f0): datao=0, datal=4096, cccid=0 00:18:34.118 [2024-10-01 15:31:33.219875] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03000) on tqpair(0x9dc8f0): expected_datao=0, payload_size=4096 00:18:34.118 [2024-10-01 15:31:33.219885] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.219899] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.219908] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.219922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.118 [2024-10-01 15:31:33.219930] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.118 [2024-10-01 15:31:33.219933] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.219938] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03000) on tqpair=0x9dc8f0 00:18:34.118 [2024-10-01 15:31:33.219948] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:18:34.118 [2024-10-01 15:31:33.219954] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:18:34.118 [2024-10-01 15:31:33.219959] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:18:34.118 [2024-10-01 15:31:33.219965] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:18:34.118 [2024-10-01 15:31:33.219969] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:18:34.118 [2024-10-01 15:31:33.219975] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:18:34.118 [2024-10-01 15:31:33.219988] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:18:34.118 [2024-10-01 15:31:33.220009] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220019] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220026] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9dc8f0) 00:18:34.118 [2024-10-01 15:31:33.220038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:34.118 [2024-10-01 15:31:33.220074] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03000, cid 0, qid 0 00:18:34.118 [2024-10-01 15:31:33.220139] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.118 [2024-10-01 15:31:33.220150] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.118 [2024-10-01 15:31:33.220157] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220164] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03000) on tqpair=0x9dc8f0 00:18:34.118 [2024-10-01 15:31:33.220176] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220191] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9dc8f0) 00:18:34.118 [2024-10-01 15:31:33.220199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.118 [2024-10-01 15:31:33.220206] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
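The identify-controller step above is where the host learns the transport and controller limits (transport max transfer capped by MDTS at 131072, CNTLID 0x0001, 16 SGEs); initialization then finishes below with AER and keep-alive setup. Once a run like this completes, the same tool can be pointed at the NVM subsystem itself rather than the discovery NQN — a hypothetical follow-up, not part of this run, and the -L all DEBUG lines only appear on a debug build:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all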
00:18:34.118 [2024-10-01 15:31:33.220211] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9dc8f0) 00:18:34.118 [2024-10-01 15:31:33.220221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.118 [2024-10-01 15:31:33.220228] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220232] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220236] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9dc8f0) 00:18:34.118 [2024-10-01 15:31:33.220243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.118 [2024-10-01 15:31:33.220249] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220253] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220257] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.118 [2024-10-01 15:31:33.220264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.118 [2024-10-01 15:31:33.220269] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:18:34.118 [2024-10-01 15:31:33.220289] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:34.118 [2024-10-01 15:31:33.220304] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220312] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9dc8f0) 00:18:34.118 [2024-10-01 15:31:33.220323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.118 [2024-10-01 15:31:33.220359] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03000, cid 0, qid 0 00:18:34.118 [2024-10-01 15:31:33.220372] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03180, cid 1, qid 0 00:18:34.118 [2024-10-01 15:31:33.220381] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03300, cid 2, qid 0 00:18:34.118 [2024-10-01 15:31:33.220389] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.118 [2024-10-01 15:31:33.220398] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03600, cid 4, qid 0 00:18:34.118 [2024-10-01 15:31:33.220496] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.118 [2024-10-01 15:31:33.220508] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.118 [2024-10-01 15:31:33.220515] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220522] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03600) on tqpair=0x9dc8f0 00:18:34.118 [2024-10-01 15:31:33.220531] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:18:34.118 [2024-10-01 15:31:33.220541] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:18:34.118 [2024-10-01 15:31:33.220560] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220566] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9dc8f0) 00:18:34.118 [2024-10-01 15:31:33.220575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.118 [2024-10-01 15:31:33.220600] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03600, cid 4, qid 0 00:18:34.118 [2024-10-01 15:31:33.220677] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.118 [2024-10-01 15:31:33.220697] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.118 [2024-10-01 15:31:33.220704] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220708] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9dc8f0): datao=0, datal=4096, cccid=4 00:18:34.118 [2024-10-01 15:31:33.220713] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03600) on tqpair(0x9dc8f0): expected_datao=0, payload_size=4096 00:18:34.118 [2024-10-01 15:31:33.220718] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220726] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220731] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220740] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.118 [2024-10-01 15:31:33.220747] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.118 [2024-10-01 15:31:33.220751] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220757] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03600) on tqpair=0x9dc8f0 00:18:34.118 [2024-10-01 15:31:33.220778] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:18:34.118 [2024-10-01 15:31:33.220819] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220827] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9dc8f0) 00:18:34.118 [2024-10-01 15:31:33.220836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.118 [2024-10-01 15:31:33.220844] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220860] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.220867] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9dc8f0) 00:18:34.118 [2024-10-01 15:31:33.220874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.118 [2024-10-01 15:31:33.220910] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03600, cid 4, qid 0 00:18:34.118 [2024-10-01 15:31:33.220924] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03780, cid 5, qid 0 00:18:34.118 [2024-10-01 15:31:33.221026] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.118 [2024-10-01 15:31:33.221053] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.118 [2024-10-01 15:31:33.221060] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.221064] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9dc8f0): datao=0, datal=1024, cccid=4 00:18:34.118 [2024-10-01 15:31:33.221069] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03600) on tqpair(0x9dc8f0): expected_datao=0, payload_size=1024 00:18:34.118 [2024-10-01 15:31:33.221074] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.221082] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.221087] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.221093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.118 [2024-10-01 15:31:33.221099] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.118 [2024-10-01 15:31:33.221104] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.221108] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03780) on tqpair=0x9dc8f0 00:18:34.118 [2024-10-01 15:31:33.261502] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.118 [2024-10-01 15:31:33.261538] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.118 [2024-10-01 15:31:33.261545] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.261551] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03600) on tqpair=0x9dc8f0 00:18:34.118 [2024-10-01 15:31:33.261583] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.118 [2024-10-01 15:31:33.261589] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9dc8f0) 00:18:34.118 [2024-10-01 15:31:33.261602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.119 [2024-10-01 15:31:33.261649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03600, cid 4, qid 0 00:18:34.119 [2024-10-01 15:31:33.261751] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.119 [2024-10-01 15:31:33.261759] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.119 [2024-10-01 15:31:33.261764] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.119 [2024-10-01 15:31:33.261768] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9dc8f0): datao=0, datal=3072, cccid=4 00:18:34.119 [2024-10-01 15:31:33.261773] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03600) on tqpair(0x9dc8f0): expected_datao=0, payload_size=3072 00:18:34.119 [2024-10-01 15:31:33.261779] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.119 [2024-10-01 15:31:33.261790] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.119 [2024-10-01 15:31:33.261798] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.119 [2024-10-01 
15:31:33.261812] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.119 [2024-10-01 15:31:33.261823] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.119 [2024-10-01 15:31:33.261829] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.119 [2024-10-01 15:31:33.261836] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03600) on tqpair=0x9dc8f0 00:18:34.119 [2024-10-01 15:31:33.261855] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.119 [2024-10-01 15:31:33.261862] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9dc8f0) 00:18:34.119 [2024-10-01 15:31:33.261870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.119 [2024-10-01 15:31:33.261904] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03600, cid 4, qid 0 00:18:34.119 [2024-10-01 15:31:33.261984] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.119 [2024-10-01 15:31:33.261995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.119 [2024-10-01 15:31:33.261999] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.119 [2024-10-01 15:31:33.262004] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9dc8f0): datao=0, datal=8, cccid=4 00:18:34.119 [2024-10-01 15:31:33.262009] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03600) on tqpair(0x9dc8f0): expected_datao=0, payload_size=8 00:18:34.119 [2024-10-01 15:31:33.262014] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.119 [2024-10-01 15:31:33.262021] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.119 [2024-10-01 15:31:33.262026] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.381 [2024-10-01 15:31:33.306468] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.381 [2024-10-01 15:31:33.306504] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.381 [2024-10-01 15:31:33.306511] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.381 [2024-10-01 15:31:33.306517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03600) on tqpair=0x9dc8f0 00:18:34.381 ===================================================== 00:18:34.381 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:34.381 ===================================================== 00:18:34.381 Controller Capabilities/Features 00:18:34.381 ================================ 00:18:34.381 Vendor ID: 0000 00:18:34.381 Subsystem Vendor ID: 0000 00:18:34.381 Serial Number: .................... 00:18:34.381 Model Number: ........................................ 
00:18:34.381 Firmware Version: 25.01 00:18:34.381 Recommended Arb Burst: 0 00:18:34.381 IEEE OUI Identifier: 00 00 00 00:18:34.381 Multi-path I/O 00:18:34.381 May have multiple subsystem ports: No 00:18:34.381 May have multiple controllers: No 00:18:34.381 Associated with SR-IOV VF: No 00:18:34.381 Max Data Transfer Size: 131072 00:18:34.381 Max Number of Namespaces: 0 00:18:34.381 Max Number of I/O Queues: 1024 00:18:34.381 NVMe Specification Version (VS): 1.3 00:18:34.381 NVMe Specification Version (Identify): 1.3 00:18:34.381 Maximum Queue Entries: 128 00:18:34.381 Contiguous Queues Required: Yes 00:18:34.381 Arbitration Mechanisms Supported 00:18:34.381 Weighted Round Robin: Not Supported 00:18:34.381 Vendor Specific: Not Supported 00:18:34.381 Reset Timeout: 15000 ms 00:18:34.381 Doorbell Stride: 4 bytes 00:18:34.381 NVM Subsystem Reset: Not Supported 00:18:34.381 Command Sets Supported 00:18:34.381 NVM Command Set: Supported 00:18:34.381 Boot Partition: Not Supported 00:18:34.381 Memory Page Size Minimum: 4096 bytes 00:18:34.381 Memory Page Size Maximum: 4096 bytes 00:18:34.381 Persistent Memory Region: Not Supported 00:18:34.381 Optional Asynchronous Events Supported 00:18:34.381 Namespace Attribute Notices: Not Supported 00:18:34.381 Firmware Activation Notices: Not Supported 00:18:34.381 ANA Change Notices: Not Supported 00:18:34.382 PLE Aggregate Log Change Notices: Not Supported 00:18:34.382 LBA Status Info Alert Notices: Not Supported 00:18:34.382 EGE Aggregate Log Change Notices: Not Supported 00:18:34.382 Normal NVM Subsystem Shutdown event: Not Supported 00:18:34.382 Zone Descriptor Change Notices: Not Supported 00:18:34.382 Discovery Log Change Notices: Supported 00:18:34.382 Controller Attributes 00:18:34.382 128-bit Host Identifier: Not Supported 00:18:34.382 Non-Operational Permissive Mode: Not Supported 00:18:34.382 NVM Sets: Not Supported 00:18:34.382 Read Recovery Levels: Not Supported 00:18:34.382 Endurance Groups: Not Supported 00:18:34.382 Predictable Latency Mode: Not Supported 00:18:34.382 Traffic Based Keep Alive: Not Supported 00:18:34.382 Namespace Granularity: Not Supported 00:18:34.382 SQ Associations: Not Supported 00:18:34.382 UUID List: Not Supported 00:18:34.382 Multi-Domain Subsystem: Not Supported 00:18:34.382 Fixed Capacity Management: Not Supported 00:18:34.382 Variable Capacity Management: Not Supported 00:18:34.382 Delete Endurance Group: Not Supported 00:18:34.382 Delete NVM Set: Not Supported 00:18:34.382 Extended LBA Formats Supported: Not Supported 00:18:34.382 Flexible Data Placement Supported: Not Supported 00:18:34.382 00:18:34.382 Controller Memory Buffer Support 00:18:34.382 ================================ 00:18:34.382 Supported: No 00:18:34.382 00:18:34.382 Persistent Memory Region Support 00:18:34.382 ================================ 00:18:34.382 Supported: No 00:18:34.382 00:18:34.382 Admin Command Set Attributes 00:18:34.382 ============================ 00:18:34.382 Security Send/Receive: Not Supported 00:18:34.382 Format NVM: Not Supported 00:18:34.382 Firmware Activate/Download: Not Supported 00:18:34.382 Namespace Management: Not Supported 00:18:34.382 Device Self-Test: Not Supported 00:18:34.382 Directives: Not Supported 00:18:34.382 NVMe-MI: Not Supported 00:18:34.382 Virtualization Management: Not Supported 00:18:34.382 Doorbell Buffer Config: Not Supported 00:18:34.382 Get LBA Status Capability: Not Supported 00:18:34.382 Command & Feature Lockdown Capability: Not Supported 00:18:34.382 Abort Command Limit: 1 00:18:34.382 Async
Event Request Limit: 4 00:18:34.382 Number of Firmware Slots: N/A 00:18:34.382 Firmware Slot 1 Read-Only: N/A 00:18:34.382 Firmware Activation Without Reset: N/A 00:18:34.382 Multiple Update Detection Support: N/A 00:18:34.382 Firmware Update Granularity: No Information Provided 00:18:34.382 Per-Namespace SMART Log: No 00:18:34.382 Asymmetric Namespace Access Log Page: Not Supported 00:18:34.382 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:34.382 Command Effects Log Page: Not Supported 00:18:34.382 Get Log Page Extended Data: Supported 00:18:34.382 Telemetry Log Pages: Not Supported 00:18:34.382 Persistent Event Log Pages: Not Supported 00:18:34.382 Supported Log Pages Log Page: May Support 00:18:34.382 Commands Supported & Effects Log Page: Not Supported 00:18:34.382 Feature Identifiers & Effects Log Page: May Support 00:18:34.382 NVMe-MI Commands & Effects Log Page: May Support 00:18:34.382 Data Area 4 for Telemetry Log: Not Supported 00:18:34.382 Error Log Page Entries Supported: 128 00:18:34.382 Keep Alive: Not Supported 00:18:34.382 00:18:34.382 NVM Command Set Attributes 00:18:34.382 ========================== 00:18:34.382 Submission Queue Entry Size 00:18:34.382 Max: 1 00:18:34.382 Min: 1 00:18:34.382 Completion Queue Entry Size 00:18:34.382 Max: 1 00:18:34.382 Min: 1 00:18:34.382 Number of Namespaces: 0 00:18:34.382 Compare Command: Not Supported 00:18:34.382 Write Uncorrectable Command: Not Supported 00:18:34.382 Dataset Management Command: Not Supported 00:18:34.382 Write Zeroes Command: Not Supported 00:18:34.382 Set Features Save Field: Not Supported 00:18:34.382 Reservations: Not Supported 00:18:34.382 Timestamp: Not Supported 00:18:34.382 Copy: Not Supported 00:18:34.382 Volatile Write Cache: Not Present 00:18:34.382 Atomic Write Unit (Normal): 1 00:18:34.382 Atomic Write Unit (PFail): 1 00:18:34.382 Atomic Compare & Write Unit: 1 00:18:34.382 Fused Compare & Write: Supported 00:18:34.382 Scatter-Gather List 00:18:34.382 SGL Command Set: Supported 00:18:34.382 SGL Keyed: Supported 00:18:34.382 SGL Bit Bucket Descriptor: Not Supported 00:18:34.382 SGL Metadata Pointer: Not Supported 00:18:34.382 Oversized SGL: Not Supported 00:18:34.382 SGL Metadata Address: Not Supported 00:18:34.382 SGL Offset: Supported 00:18:34.382 Transport SGL Data Block: Not Supported 00:18:34.382 Replay Protected Memory Block: Not Supported 00:18:34.382 00:18:34.382 Firmware Slot Information 00:18:34.382 ========================= 00:18:34.382 Active slot: 0 00:18:34.382 00:18:34.382 00:18:34.382 Error Log 00:18:34.382 ========= 00:18:34.382 00:18:34.382 Active Namespaces 00:18:34.382 ================= 00:18:34.382 Discovery Log Page 00:18:34.382 ================== 00:18:34.382 Generation Counter: 2 00:18:34.382 Number of Records: 2 00:18:34.382 Record Format: 0 00:18:34.382 00:18:34.382 Discovery Log Entry 0 00:18:34.382 ---------------------- 00:18:34.382 Transport Type: 3 (TCP) 00:18:34.382 Address Family: 1 (IPv4) 00:18:34.382 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:34.382 Entry Flags: 00:18:34.382 Duplicate Returned Information: 1 00:18:34.382 Explicit Persistent Connection Support for Discovery: 1 00:18:34.382 Transport Requirements: 00:18:34.382 Secure Channel: Not Required 00:18:34.382 Port ID: 0 (0x0000) 00:18:34.382 Controller ID: 65535 (0xffff) 00:18:34.382 Admin Max SQ Size: 128 00:18:34.382 Transport Service Identifier: 4420 00:18:34.382 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:34.382 Transport Address: 10.0.0.3 00:18:34.382 
Discovery Log Entry 1 00:18:34.382 ---------------------- 00:18:34.382 Transport Type: 3 (TCP) 00:18:34.382 Address Family: 1 (IPv4) 00:18:34.382 Subsystem Type: 2 (NVM Subsystem) 00:18:34.382 Entry Flags: 00:18:34.382 Duplicate Returned Information: 0 00:18:34.382 Explicit Persistent Connection Support for Discovery: 0 00:18:34.382 Transport Requirements: 00:18:34.382 Secure Channel: Not Required 00:18:34.382 Port ID: 0 (0x0000) 00:18:34.382 Controller ID: 65535 (0xffff) 00:18:34.382 Admin Max SQ Size: 128 00:18:34.382 Transport Service Identifier: 4420 00:18:34.382 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:34.382 Transport Address: 10.0.0.3 [2024-10-01 15:31:33.306706] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:18:34.382 [2024-10-01 15:31:33.306727] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03000) on tqpair=0x9dc8f0 00:18:34.382 [2024-10-01 15:31:33.306737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.382 [2024-10-01 15:31:33.306744] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03180) on tqpair=0x9dc8f0 00:18:34.382 [2024-10-01 15:31:33.306749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.382 [2024-10-01 15:31:33.306755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03300) on tqpair=0x9dc8f0 00:18:34.382 [2024-10-01 15:31:33.306760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.382 [2024-10-01 15:31:33.306766] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.382 [2024-10-01 15:31:33.306773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.382 [2024-10-01 15:31:33.306792] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.306802] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.306808] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.382 [2024-10-01 15:31:33.306825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.382 [2024-10-01 15:31:33.306866] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.382 [2024-10-01 15:31:33.306953] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.382 [2024-10-01 15:31:33.306967] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.382 [2024-10-01 15:31:33.306974] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.306981] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.382 [2024-10-01 15:31:33.306996] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.307003] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.307007] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.382 [2024-10-01 15:31:33.307016] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.382 [2024-10-01 15:31:33.307047] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.382 [2024-10-01 15:31:33.307146] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.382 [2024-10-01 15:31:33.307159] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.382 [2024-10-01 15:31:33.307167] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.307174] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.382 [2024-10-01 15:31:33.307183] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:18:34.382 [2024-10-01 15:31:33.307192] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:18:34.382 [2024-10-01 15:31:33.307210] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.307219] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.307223] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.382 [2024-10-01 15:31:33.307231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.382 [2024-10-01 15:31:33.307258] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.382 [2024-10-01 15:31:33.307317] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.382 [2024-10-01 15:31:33.307330] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.382 [2024-10-01 15:31:33.307338] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.307345] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.382 [2024-10-01 15:31:33.307363] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.307370] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.382 [2024-10-01 15:31:33.307374] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.307382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.307406] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.307484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.307496] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.307500] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307505] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.307517] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307522] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307526] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.307537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.307574] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.307630] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.307638] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.307642] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307646] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.307658] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307670] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.307682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.307714] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.307771] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.307781] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.307786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307790] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.307802] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307807] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307811] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.307820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.307853] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.307911] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.307921] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.307925] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307929] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.307941] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307946] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.307950] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.307958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.307982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.308041] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.308055] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.308062] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308068] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.308081] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308091] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.308099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.308126] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.308185] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.308200] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.308205] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308209] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.308221] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308227] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308231] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.308239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.308262] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.308320] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.308334] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.308341] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308348] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.308364] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308370] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308375] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.308382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.308406] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.308477] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.308491] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.308495] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308500] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.308512] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308517] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308521] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.308530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.308557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.308619] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.308632] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.308637] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308641] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.308654] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308659] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308663] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.308671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.308695] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.308754] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.308769] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.308776] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308783] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.308796] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308802] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308806] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.308814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.308837] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.308911] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.308922] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.308926] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308930] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.308944] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308953] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.308959] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.308972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.309002] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.309058] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.309069] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.309076] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.309101] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309108] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309112] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.309120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.309143] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.309206] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.309218] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.309223] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.309239] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309244] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309250] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.309262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.309298] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.309351] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.309360] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.309364] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309368] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 
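The repeated FABRIC PROPERTY GET exchanges through this stretch are the host polling CSTS until the controller reports shutdown complete, within the 10000 ms shutdown timeout logged above. Two natural follow-ups, neither captured in this log: the two discovery log entries printed earlier (the discovery subsystem itself plus nqn.2016-06.io.spdk:cnode1) are exactly what a kernel initiator would be handed, so they can be cross-checked with nvme-cli if it and the nvme-tcp module are available on the host VM, and the target side can be torn down with the RPCs that mirror the setup at the top of the test (a sketch; command names are standard SPDK/nvme-cli, not taken from this run):

    # host-side cross-check of the discovery log via the kernel initiator
    modprobe nvme_tcp
    nvme discover -t tcp -a 10.0.0.3 -s 4420
    # optionally attach the NVM subsystem as a kernel block device
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1

    # target-side teardown, inverting the setup RPCs
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py bdev_malloc_delete Malloc0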
[2024-10-01 15:31:33.309380] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309385] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309390] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.383 [2024-10-01 15:31:33.309398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.383 [2024-10-01 15:31:33.309420] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.383 [2024-10-01 15:31:33.309494] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.383 [2024-10-01 15:31:33.309502] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.383 [2024-10-01 15:31:33.309506] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309511] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.383 [2024-10-01 15:31:33.309522] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.383 [2024-10-01 15:31:33.309527] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.309531] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.384 [2024-10-01 15:31:33.309539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.309561] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.384 [2024-10-01 15:31:33.309616] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.309623] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.309627] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.309632] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.384 [2024-10-01 15:31:33.309643] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.309648] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.309652] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.384 [2024-10-01 15:31:33.309660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.309679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.384 [2024-10-01 15:31:33.309735] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.309742] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.309746] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.309751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.384 [2024-10-01 15:31:33.309762] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.309767] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 
15:31:33.309771] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.384 [2024-10-01 15:31:33.309779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.309798] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.384 [2024-10-01 15:31:33.309855] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.309862] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.309866] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.309870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.384 [2024-10-01 15:31:33.309881] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.309886] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.309890] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.384 [2024-10-01 15:31:33.309898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.309917] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.384 [2024-10-01 15:31:33.309976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.309984] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.309988] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.309992] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.384 [2024-10-01 15:31:33.310003] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310008] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310012] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.384 [2024-10-01 15:31:33.310020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.310038] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.384 [2024-10-01 15:31:33.310100] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.310108] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.310112] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310116] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.384 [2024-10-01 15:31:33.310127] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310132] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310136] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.384 [2024-10-01 15:31:33.310145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.310165] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.384 [2024-10-01 15:31:33.310221] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.310228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.310233] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310237] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.384 [2024-10-01 15:31:33.310248] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310254] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310258] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.384 [2024-10-01 15:31:33.310265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.310284] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.384 [2024-10-01 15:31:33.310342] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.310349] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.310354] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310358] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.384 [2024-10-01 15:31:33.310369] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310374] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.310378] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.384 [2024-10-01 15:31:33.310386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.310405] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.384 [2024-10-01 15:31:33.314448] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.314473] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.314479] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.314484] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.384 [2024-10-01 15:31:33.314498] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.314504] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.314508] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9dc8f0) 00:18:34.384 [2024-10-01 15:31:33.314518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.314548] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03480, cid 3, qid 0 00:18:34.384 [2024-10-01 
15:31:33.314610] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.314617] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.314621] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.314626] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03480) on tqpair=0x9dc8f0 00:18:34.384 [2024-10-01 15:31:33.314634] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:18:34.384 00:18:34.384 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:34.384 [2024-10-01 15:31:33.355057] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:18:34.384 [2024-10-01 15:31:33.355107] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87024 ] 00:18:34.384 [2024-10-01 15:31:33.490805] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:18:34.384 [2024-10-01 15:31:33.490875] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:34.384 [2024-10-01 15:31:33.490883] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:34.384 [2024-10-01 15:31:33.490896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:34.384 [2024-10-01 15:31:33.490908] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:34.384 [2024-10-01 15:31:33.491257] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:18:34.384 [2024-10-01 15:31:33.491330] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x92c8f0 0 00:18:34.384 [2024-10-01 15:31:33.503451] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:34.384 [2024-10-01 15:31:33.503480] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:34.384 [2024-10-01 15:31:33.503488] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:34.384 [2024-10-01 15:31:33.503492] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:34.384 [2024-10-01 15:31:33.503528] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.503537] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.503541] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x92c8f0) 00:18:34.384 [2024-10-01 15:31:33.503557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:34.384 [2024-10-01 15:31:33.503592] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953000, cid 0, qid 0 00:18:34.384 [2024-10-01 15:31:33.511448] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.511472] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.511478] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.511484] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953000) on tqpair=0x92c8f0 00:18:34.384 [2024-10-01 15:31:33.511495] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:34.384 [2024-10-01 15:31:33.511505] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:18:34.384 [2024-10-01 15:31:33.511512] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:18:34.384 [2024-10-01 15:31:33.511530] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.511536] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.511541] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x92c8f0) 00:18:34.384 [2024-10-01 15:31:33.511551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.511584] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953000, cid 0, qid 0 00:18:34.384 [2024-10-01 15:31:33.511655] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.511663] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.511667] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.511671] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953000) on tqpair=0x92c8f0 00:18:34.384 [2024-10-01 15:31:33.511678] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:18:34.384 [2024-10-01 15:31:33.511686] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:18:34.384 [2024-10-01 15:31:33.511694] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.511699] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.384 [2024-10-01 15:31:33.511703] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x92c8f0) 00:18:34.384 [2024-10-01 15:31:33.511711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.384 [2024-10-01 15:31:33.511733] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953000, cid 0, qid 0 00:18:34.384 [2024-10-01 15:31:33.511795] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.384 [2024-10-01 15:31:33.511803] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.384 [2024-10-01 15:31:33.511807] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.511811] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953000) on tqpair=0x92c8f0 00:18:34.385 [2024-10-01 15:31:33.511817] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:18:34.385 [2024-10-01 15:31:33.511827] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to check en wait for cc (timeout 15000 ms) 00:18:34.385 [2024-10-01 15:31:33.511834] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.511839] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.511843] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.511851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.385 [2024-10-01 15:31:33.511871] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953000, cid 0, qid 0 00:18:34.385 [2024-10-01 15:31:33.511929] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.385 [2024-10-01 15:31:33.511937] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.385 [2024-10-01 15:31:33.511941] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.511945] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953000) on tqpair=0x92c8f0 00:18:34.385 [2024-10-01 15:31:33.511951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:34.385 [2024-10-01 15:31:33.511962] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.511967] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.511972] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.511979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.385 [2024-10-01 15:31:33.511999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953000, cid 0, qid 0 00:18:34.385 [2024-10-01 15:31:33.512056] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.385 [2024-10-01 15:31:33.512063] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.385 [2024-10-01 15:31:33.512067] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512072] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953000) on tqpair=0x92c8f0 00:18:34.385 [2024-10-01 15:31:33.512077] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:18:34.385 [2024-10-01 15:31:33.512083] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:18:34.385 [2024-10-01 15:31:33.512091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:34.385 [2024-10-01 15:31:33.512198] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:18:34.385 [2024-10-01 15:31:33.512203] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:34.385 [2024-10-01 15:31:33.512213] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512218] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:18:34.385 [2024-10-01 15:31:33.512222] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.512230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.385 [2024-10-01 15:31:33.512251] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953000, cid 0, qid 0 00:18:34.385 [2024-10-01 15:31:33.512311] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.385 [2024-10-01 15:31:33.512324] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.385 [2024-10-01 15:31:33.512329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512334] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953000) on tqpair=0x92c8f0 00:18:34.385 [2024-10-01 15:31:33.512340] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:34.385 [2024-10-01 15:31:33.512352] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512357] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512362] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.512370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.385 [2024-10-01 15:31:33.512391] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953000, cid 0, qid 0 00:18:34.385 [2024-10-01 15:31:33.512464] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.385 [2024-10-01 15:31:33.512474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.385 [2024-10-01 15:31:33.512478] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512483] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953000) on tqpair=0x92c8f0 00:18:34.385 [2024-10-01 15:31:33.512488] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:34.385 [2024-10-01 15:31:33.512494] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:18:34.385 [2024-10-01 15:31:33.512503] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:18:34.385 [2024-10-01 15:31:33.512520] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:18:34.385 [2024-10-01 15:31:33.512531] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512536] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.512545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.385 [2024-10-01 15:31:33.512569] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953000, cid 0, qid 0 00:18:34.385 [2024-10-01 15:31:33.512673] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.385 [2024-10-01 15:31:33.512681] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.385 [2024-10-01 15:31:33.512685] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512689] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x92c8f0): datao=0, datal=4096, cccid=0 00:18:34.385 [2024-10-01 15:31:33.512695] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953000) on tqpair(0x92c8f0): expected_datao=0, payload_size=4096 00:18:34.385 [2024-10-01 15:31:33.512700] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512709] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512714] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512723] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.385 [2024-10-01 15:31:33.512730] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.385 [2024-10-01 15:31:33.512733] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512738] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953000) on tqpair=0x92c8f0 00:18:34.385 [2024-10-01 15:31:33.512747] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:18:34.385 [2024-10-01 15:31:33.512753] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:18:34.385 [2024-10-01 15:31:33.512758] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:18:34.385 [2024-10-01 15:31:33.512763] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:18:34.385 [2024-10-01 15:31:33.512768] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:18:34.385 [2024-10-01 15:31:33.512774] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:18:34.385 [2024-10-01 15:31:33.512784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:18:34.385 [2024-10-01 15:31:33.512797] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512803] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512807] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.512816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:34.385 [2024-10-01 15:31:33.512839] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953000, cid 0, qid 0 00:18:34.385 [2024-10-01 15:31:33.512914] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.385 [2024-10-01 15:31:33.512924] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.385 [2024-10-01 15:31:33.512928] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512932] nvme_tcp.c:1079:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x953000) on tqpair=0x92c8f0 00:18:34.385 [2024-10-01 15:31:33.512942] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512946] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512950] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.512958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.385 [2024-10-01 15:31:33.512965] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512969] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512973] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.512980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.385 [2024-10-01 15:31:33.512987] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512991] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.512995] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.513001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.385 [2024-10-01 15:31:33.513008] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.513012] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.513016] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.513023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.385 [2024-10-01 15:31:33.513028] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:34.385 [2024-10-01 15:31:33.513043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:34.385 [2024-10-01 15:31:33.513051] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.385 [2024-10-01 15:31:33.513055] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x92c8f0) 00:18:34.385 [2024-10-01 15:31:33.513063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.385 [2024-10-01 15:31:33.513087] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953000, cid 0, qid 0 00:18:34.385 [2024-10-01 15:31:33.513095] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953180, cid 1, qid 0 00:18:34.385 [2024-10-01 15:31:33.513101] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953300, cid 2, qid 0 00:18:34.385 [2024-10-01 15:31:33.513106] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953480, cid 3, qid 0 00:18:34.385 [2024-10-01 15:31:33.513112] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953600, cid 4, qid 0 00:18:34.385 [2024-10-01 15:31:33.513207] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.385 [2024-10-01 15:31:33.513215] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.385 [2024-10-01 15:31:33.513219] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513223] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953600) on tqpair=0x92c8f0 00:18:34.386 [2024-10-01 15:31:33.513229] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:18:34.386 [2024-10-01 15:31:33.513235] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.513250] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.513262] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.513274] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513281] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513288] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.513300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:34.386 [2024-10-01 15:31:33.513330] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953600, cid 4, qid 0 00:18:34.386 [2024-10-01 15:31:33.513397] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.386 [2024-10-01 15:31:33.513405] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.386 [2024-10-01 15:31:33.513409] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513413] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953600) on tqpair=0x92c8f0 00:18:34.386 [2024-10-01 15:31:33.513499] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.513514] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.513524] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513529] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.513537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.386 [2024-10-01 15:31:33.513561] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953600, cid 4, qid 0 00:18:34.386 [2024-10-01 15:31:33.513634] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.386 [2024-10-01 15:31:33.513641] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:18:34.386 [2024-10-01 15:31:33.513645] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513650] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x92c8f0): datao=0, datal=4096, cccid=4 00:18:34.386 [2024-10-01 15:31:33.513655] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953600) on tqpair(0x92c8f0): expected_datao=0, payload_size=4096 00:18:34.386 [2024-10-01 15:31:33.513660] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513668] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513672] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513681] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.386 [2024-10-01 15:31:33.513688] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.386 [2024-10-01 15:31:33.513692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513696] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953600) on tqpair=0x92c8f0 00:18:34.386 [2024-10-01 15:31:33.513716] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:18:34.386 [2024-10-01 15:31:33.513727] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.513739] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.513748] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513752] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.513760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.386 [2024-10-01 15:31:33.513783] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953600, cid 4, qid 0 00:18:34.386 [2024-10-01 15:31:33.513894] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.386 [2024-10-01 15:31:33.513902] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.386 [2024-10-01 15:31:33.513906] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513910] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x92c8f0): datao=0, datal=4096, cccid=4 00:18:34.386 [2024-10-01 15:31:33.513915] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953600) on tqpair(0x92c8f0): expected_datao=0, payload_size=4096 00:18:34.386 [2024-10-01 15:31:33.513920] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513928] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513932] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513941] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.386 [2024-10-01 15:31:33.513947] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.386 [2024-10-01 15:31:33.513951] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513956] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953600) on tqpair=0x92c8f0 00:18:34.386 [2024-10-01 15:31:33.513968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.513979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.513988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.513993] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.514002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.386 [2024-10-01 15:31:33.514023] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953600, cid 4, qid 0 00:18:34.386 [2024-10-01 15:31:33.514094] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.386 [2024-10-01 15:31:33.514102] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.386 [2024-10-01 15:31:33.514106] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514110] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x92c8f0): datao=0, datal=4096, cccid=4 00:18:34.386 [2024-10-01 15:31:33.514115] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953600) on tqpair(0x92c8f0): expected_datao=0, payload_size=4096 00:18:34.386 [2024-10-01 15:31:33.514120] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514127] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514132] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514140] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.386 [2024-10-01 15:31:33.514147] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.386 [2024-10-01 15:31:33.514151] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514155] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953600) on tqpair=0x92c8f0 00:18:34.386 [2024-10-01 15:31:33.514170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.514180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.514190] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.514197] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.514204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.514210] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.514216] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:18:34.386 [2024-10-01 15:31:33.514221] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:18:34.386 [2024-10-01 15:31:33.514227] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:18:34.386 [2024-10-01 15:31:33.514247] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.514260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.386 [2024-10-01 15:31:33.514268] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514272] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514276] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.514283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.386 [2024-10-01 15:31:33.514311] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953600, cid 4, qid 0 00:18:34.386 [2024-10-01 15:31:33.514319] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953780, cid 5, qid 0 00:18:34.386 [2024-10-01 15:31:33.514390] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.386 [2024-10-01 15:31:33.514398] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.386 [2024-10-01 15:31:33.514402] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514407] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953600) on tqpair=0x92c8f0 00:18:34.386 [2024-10-01 15:31:33.514414] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.386 [2024-10-01 15:31:33.514437] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.386 [2024-10-01 15:31:33.514443] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514448] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953780) on tqpair=0x92c8f0 00:18:34.386 [2024-10-01 15:31:33.514461] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.514473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.386 [2024-10-01 15:31:33.514497] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953780, cid 5, qid 0 00:18:34.386 [2024-10-01 15:31:33.514571] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.386 [2024-10-01 15:31:33.514578] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.386 [2024-10-01 15:31:33.514583] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514587] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953780) on tqpair=0x92c8f0 00:18:34.386 [2024-10-01 15:31:33.514599] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514603] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.514611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.386 [2024-10-01 15:31:33.514631] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953780, cid 5, qid 0 00:18:34.386 [2024-10-01 15:31:33.514692] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.386 [2024-10-01 15:31:33.514700] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.386 [2024-10-01 15:31:33.514704] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514708] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953780) on tqpair=0x92c8f0 00:18:34.386 [2024-10-01 15:31:33.514720] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514725] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.514732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.386 [2024-10-01 15:31:33.514752] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953780, cid 5, qid 0 00:18:34.386 [2024-10-01 15:31:33.514809] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.386 [2024-10-01 15:31:33.514817] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.386 [2024-10-01 15:31:33.514821] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514825] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953780) on tqpair=0x92c8f0 00:18:34.386 [2024-10-01 15:31:33.514847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514853] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.514861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.386 [2024-10-01 15:31:33.514869] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514874] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x92c8f0) 00:18:34.386 [2024-10-01 15:31:33.514881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.386 [2024-10-01 15:31:33.514889] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.386 [2024-10-01 15:31:33.514893] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x92c8f0) 00:18:34.387 [2024-10-01 15:31:33.514900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.387 [2024-10-01 15:31:33.514909] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.514913] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x92c8f0) 00:18:34.387 [2024-10-01 15:31:33.514920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.387 [2024-10-01 15:31:33.514943] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953780, cid 5, qid 0 00:18:34.387 [2024-10-01 15:31:33.514951] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953600, cid 4, qid 0 00:18:34.387 [2024-10-01 15:31:33.514956] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953900, cid 6, qid 0 00:18:34.387 [2024-10-01 15:31:33.514962] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953a80, cid 7, qid 0 00:18:34.387 [2024-10-01 15:31:33.515105] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.387 [2024-10-01 15:31:33.515112] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.387 [2024-10-01 15:31:33.515116] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515120] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x92c8f0): datao=0, datal=8192, cccid=5 00:18:34.387 [2024-10-01 15:31:33.515125] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953780) on tqpair(0x92c8f0): expected_datao=0, payload_size=8192 00:18:34.387 [2024-10-01 15:31:33.515130] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515148] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515154] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515160] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.387 [2024-10-01 15:31:33.515167] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.387 [2024-10-01 15:31:33.515171] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515175] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x92c8f0): datao=0, datal=512, cccid=4 00:18:34.387 [2024-10-01 15:31:33.515180] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953600) on tqpair(0x92c8f0): expected_datao=0, payload_size=512 00:18:34.387 [2024-10-01 15:31:33.515185] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515192] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515196] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515202] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.387 [2024-10-01 15:31:33.515209] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.387 [2024-10-01 15:31:33.515212] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515216] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x92c8f0): datao=0, datal=512, cccid=6 00:18:34.387 [2024-10-01 15:31:33.515221] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953900) on tqpair(0x92c8f0): expected_datao=0, payload_size=512 00:18:34.387 [2024-10-01 15:31:33.515226] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515232] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515236] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515243] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:34.387 [2024-10-01 15:31:33.515249] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:34.387 [2024-10-01 15:31:33.515252] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515256] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x92c8f0): datao=0, datal=4096, cccid=7 00:18:34.387 [2024-10-01 15:31:33.515261] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953a80) on tqpair(0x92c8f0): expected_datao=0, payload_size=4096 00:18:34.387 [2024-10-01 15:31:33.515266] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515273] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515277] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515286] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.387 [2024-10-01 15:31:33.515293] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.387 [2024-10-01 15:31:33.515296] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515301] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953780) on tqpair=0x92c8f0 00:18:34.387 [2024-10-01 15:31:33.515320] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.387 [2024-10-01 15:31:33.515328] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.387 [2024-10-01 15:31:33.515331] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953600) on tqpair=0x92c8f0 00:18:34.387 [2024-10-01 15:31:33.515355] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.387 [2024-10-01 15:31:33.515362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.387 [2024-10-01 15:31:33.515366] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515370] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953900) on tqpair=0x92c8f0 00:18:34.387 [2024-10-01 15:31:33.515378] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:34.387 [2024-10-01 15:31:33.515385] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:34.387 [2024-10-01 15:31:33.515389] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:34.387 [2024-10-01 15:31:33.515393] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953a80) on tqpair=0x92c8f0 00:18:34.387 ===================================================== 00:18:34.387 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:34.387 ===================================================== 00:18:34.387 Controller Capabilities/Features 00:18:34.387 ================================ 
00:18:34.387 Vendor ID: 8086 00:18:34.387 Subsystem Vendor ID: 8086 00:18:34.387 Serial Number: SPDK00000000000001 00:18:34.387 Model Number: SPDK bdev Controller 00:18:34.387 Firmware Version: 25.01 00:18:34.387 Recommended Arb Burst: 6 00:18:34.387 IEEE OUI Identifier: e4 d2 5c 00:18:34.387 Multi-path I/O 00:18:34.387 May have multiple subsystem ports: Yes 00:18:34.387 May have multiple controllers: Yes 00:18:34.387 Associated with SR-IOV VF: No 00:18:34.387 Max Data Transfer Size: 131072 00:18:34.387 Max Number of Namespaces: 32 00:18:34.387 Max Number of I/O Queues: 127 00:18:34.387 NVMe Specification Version (VS): 1.3 00:18:34.387 NVMe Specification Version (Identify): 1.3 00:18:34.387 Maximum Queue Entries: 128 00:18:34.387 Contiguous Queues Required: Yes 00:18:34.387 Arbitration Mechanisms Supported 00:18:34.387 Weighted Round Robin: Not Supported 00:18:34.387 Vendor Specific: Not Supported 00:18:34.387 Reset Timeout: 15000 ms 00:18:34.387 Doorbell Stride: 4 bytes 00:18:34.387 NVM Subsystem Reset: Not Supported 00:18:34.387 Command Sets Supported 00:18:34.387 NVM Command Set: Supported 00:18:34.387 Boot Partition: Not Supported 00:18:34.387 Memory Page Size Minimum: 4096 bytes 00:18:34.387 Memory Page Size Maximum: 4096 bytes 00:18:34.387 Persistent Memory Region: Not Supported 00:18:34.387 Optional Asynchronous Events Supported 00:18:34.387 Namespace Attribute Notices: Supported 00:18:34.387 Firmware Activation Notices: Not Supported 00:18:34.387 ANA Change Notices: Not Supported 00:18:34.387 PLE Aggregate Log Change Notices: Not Supported 00:18:34.387 LBA Status Info Alert Notices: Not Supported 00:18:34.387 EGE Aggregate Log Change Notices: Not Supported 00:18:34.387 Normal NVM Subsystem Shutdown event: Not Supported 00:18:34.387 Zone Descriptor Change Notices: Not Supported 00:18:34.387 Discovery Log Change Notices: Not Supported 00:18:34.387 Controller Attributes 00:18:34.387 128-bit Host Identifier: Supported 00:18:34.387 Non-Operational Permissive Mode: Not Supported 00:18:34.387 NVM Sets: Not Supported 00:18:34.387 Read Recovery Levels: Not Supported 00:18:34.387 Endurance Groups: Not Supported 00:18:34.387 Predictable Latency Mode: Not Supported 00:18:34.387 Traffic Based Keep ALive: Not Supported 00:18:34.387 Namespace Granularity: Not Supported 00:18:34.387 SQ Associations: Not Supported 00:18:34.387 UUID List: Not Supported 00:18:34.387 Multi-Domain Subsystem: Not Supported 00:18:34.387 Fixed Capacity Management: Not Supported 00:18:34.387 Variable Capacity Management: Not Supported 00:18:34.387 Delete Endurance Group: Not Supported 00:18:34.387 Delete NVM Set: Not Supported 00:18:34.387 Extended LBA Formats Supported: Not Supported 00:18:34.387 Flexible Data Placement Supported: Not Supported 00:18:34.387 00:18:34.387 Controller Memory Buffer Support 00:18:34.387 ================================ 00:18:34.387 Supported: No 00:18:34.387 00:18:34.387 Persistent Memory Region Support 00:18:34.387 ================================ 00:18:34.387 Supported: No 00:18:34.387 00:18:34.387 Admin Command Set Attributes 00:18:34.387 ============================ 00:18:34.387 Security Send/Receive: Not Supported 00:18:34.387 Format NVM: Not Supported 00:18:34.387 Firmware Activate/Download: Not Supported 00:18:34.387 Namespace Management: Not Supported 00:18:34.387 Device Self-Test: Not Supported 00:18:34.387 Directives: Not Supported 00:18:34.387 NVMe-MI: Not Supported 00:18:34.387 Virtualization Management: Not Supported 00:18:34.387 Doorbell Buffer Config: Not Supported 00:18:34.387 
00:18:34.387 Get LBA Status Capability: Not Supported
00:18:34.387 Command & Feature Lockdown Capability: Not Supported
00:18:34.387 Abort Command Limit: 4
00:18:34.387 Async Event Request Limit: 4
00:18:34.387 Number of Firmware Slots: N/A
00:18:34.387 Firmware Slot 1 Read-Only: N/A
00:18:34.387 Firmware Activation Without Reset: N/A
00:18:34.387 Multiple Update Detection Support: N/A
00:18:34.387 Firmware Update Granularity: No Information Provided
00:18:34.387 Per-Namespace SMART Log: No
00:18:34.387 Asymmetric Namespace Access Log Page: Not Supported
00:18:34.387 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:18:34.387 Command Effects Log Page: Supported
00:18:34.387 Get Log Page Extended Data: Supported
00:18:34.387 Telemetry Log Pages: Not Supported
00:18:34.387 Persistent Event Log Pages: Not Supported
00:18:34.387 Supported Log Pages Log Page: May Support
00:18:34.387 Commands Supported & Effects Log Page: Not Supported
00:18:34.387 Feature Identifiers & Effects Log Page: May Support
00:18:34.387 NVMe-MI Commands & Effects Log Page: May Support
00:18:34.387 Data Area 4 for Telemetry Log: Not Supported
00:18:34.387 Error Log Page Entries Supported: 128
00:18:34.387 Keep Alive: Supported
00:18:34.387 Keep Alive Granularity: 10000 ms
00:18:34.387
00:18:34.387 NVM Command Set Attributes
00:18:34.387 ==========================
00:18:34.387 Submission Queue Entry Size
00:18:34.387 Max: 64
00:18:34.387 Min: 64
00:18:34.387 Completion Queue Entry Size
00:18:34.387 Max: 16
00:18:34.387 Min: 16
00:18:34.387 Number of Namespaces: 32
00:18:34.387 Compare Command: Supported
00:18:34.387 Write Uncorrectable Command: Not Supported
00:18:34.387 Dataset Management Command: Supported
00:18:34.387 Write Zeroes Command: Supported
00:18:34.387 Set Features Save Field: Not Supported
00:18:34.387 Reservations: Supported
00:18:34.387 Timestamp: Not Supported
00:18:34.387 Copy: Supported
00:18:34.387 Volatile Write Cache: Present
00:18:34.387 Atomic Write Unit (Normal): 1
00:18:34.387 Atomic Write Unit (PFail): 1
00:18:34.387 Atomic Compare & Write Unit: 1
00:18:34.387 Fused Compare & Write: Supported
00:18:34.387 Scatter-Gather List
00:18:34.387 SGL Command Set: Supported
00:18:34.387 SGL Keyed: Supported
00:18:34.387 SGL Bit Bucket Descriptor: Not Supported
00:18:34.387 SGL Metadata Pointer: Not Supported
00:18:34.387 Oversized SGL: Not Supported
00:18:34.387 SGL Metadata Address: Not Supported
00:18:34.387 SGL Offset: Supported
00:18:34.387 Transport SGL Data Block: Not Supported
00:18:34.387 Replay Protected Memory Block: Not Supported
00:18:34.387
00:18:34.387 Firmware Slot Information
00:18:34.387 =========================
00:18:34.387 Active slot: 1
00:18:34.387 Slot 1 Firmware Revision: 25.01
00:18:34.387
00:18:34.387
00:18:34.387 Commands Supported and Effects
00:18:34.387 ==============================
00:18:34.387 Admin Commands
00:18:34.388 --------------
00:18:34.388 Get Log Page (02h): Supported
00:18:34.388 Identify (06h): Supported
00:18:34.388 Abort (08h): Supported
00:18:34.388 Set Features (09h): Supported
00:18:34.388 Get Features (0Ah): Supported
00:18:34.388 Asynchronous Event Request (0Ch): Supported
00:18:34.388 Keep Alive (18h): Supported
00:18:34.388 I/O Commands
00:18:34.388 ------------
00:18:34.388 Flush (00h): Supported LBA-Change
00:18:34.388 Write (01h): Supported LBA-Change
00:18:34.388 Read (02h): Supported
00:18:34.388 Compare (05h): Supported
00:18:34.388 Write Zeroes (08h): Supported LBA-Change
00:18:34.388 Dataset Management (09h): Supported LBA-Change
00:18:34.388 Copy (19h): Supported LBA-Change
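The table above is the controller's Commands Supported and Effects log page (log identifier 05h); the LBA-Change annotation marks opcodes that may alter logical block contents. Outside this harness the same page can be read with stock nvme-cli once a host is connected; a minimal sketch, not run in this job, where the device node is an assumption:

  # Hypothetical example with nvme-cli; /dev/nvme0 is a placeholder, not from this log.
  nvme effects-log /dev/nvme0 -o json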
00:18:34.388
00:18:34.388 Error Log
00:18:34.388 =========
00:18:34.388
00:18:34.388 Arbitration
00:18:34.388 ===========
00:18:34.388 Arbitration Burst: 1
00:18:34.388
00:18:34.388 Power Management
00:18:34.388 ================
00:18:34.388 Number of Power States: 1
00:18:34.388 Current Power State: Power State #0
00:18:34.388 Power State #0:
00:18:34.388 Max Power: 0.00 W
00:18:34.388 Non-Operational State: Operational
00:18:34.388 Entry Latency: Not Reported
00:18:34.388 Exit Latency: Not Reported
00:18:34.388 Relative Read Throughput: 0
00:18:34.388 Relative Read Latency: 0
00:18:34.388 Relative Write Throughput: 0
00:18:34.388 Relative Write Latency: 0
00:18:34.388 Idle Power: Not Reported
00:18:34.388 Active Power: Not Reported
00:18:34.388 Non-Operational Permissive Mode: Not Supported
00:18:34.388
00:18:34.388 Health Information
00:18:34.388 ==================
00:18:34.388 Critical Warnings:
00:18:34.388 Available Spare Space: OK
00:18:34.388 Temperature: OK
00:18:34.388 Device Reliability: OK
00:18:34.388 Read Only: No
00:18:34.388 Volatile Memory Backup: OK
00:18:34.388 Current Temperature: 0 Kelvin (-273 Celsius)
00:18:34.388 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:18:34.388 Available Spare: 0%
00:18:34.388 Available Spare Threshold: 0%
00:18:34.388 Life Percentage Used: 0%
00:18:34.390 Data Units Read: 0
00:18:34.390 Data Units Written: 0
00:18:34.390 Host Read Commands: 0
00:18:34.390 Host Write Commands: 0
00:18:34.390 Controller Busy Time: 0 minutes
00:18:34.390 Power Cycles: 0
00:18:34.390 Power On Hours: 0 hours
00:18:34.390 Unsafe Shutdowns: 0
00:18:34.390 Unrecoverable Media Errors: 0
00:18:34.390 Lifetime Error Log Entries: 0
00:18:34.390 Warning Temperature Time: 0 minutes
00:18:34.390 Critical Temperature Time: 0 minutes
00:18:34.390
00:18:34.390 Number of Queues
00:18:34.390 ================
00:18:34.390 Number of I/O Submission Queues: 127
00:18:34.390 Number of I/O Completion Queues: 127
00:18:34.390
00:18:34.390 Active Namespaces
00:18:34.390 =================
00:18:34.390 Namespace ID:1
00:18:34.390 Error Recovery Timeout: Unlimited
00:18:34.390 Command Set Identifier: NVM (00h)
00:18:34.390 Deallocate: Supported
00:18:34.390 Deallocated/Unwritten Error: Not Supported
00:18:34.390 Deallocated Read Value: Unknown
00:18:34.390 Deallocate in Write Zeroes: Not Supported
00:18:34.390 Deallocated Guard Field: 0xFFFF
00:18:34.390 Flush: Supported
00:18:34.390 Reservation: Supported
00:18:34.390 Namespace Sharing Capabilities: Multiple Controllers
00:18:34.390 Size (in LBAs): 131072 (0GiB)
00:18:34.390 Capacity (in LBAs): 131072 (0GiB)
00:18:34.390 Utilization (in LBAs): 131072 (0GiB)
00:18:34.390 NGUID: ABCDEF0123456789ABCDEF0123456789
00:18:34.390 EUI64: ABCDEF0123456789
00:18:34.390 UUID: 15f4b16a-f9e9-44df-bc6a-d6399bc2f18d
00:18:34.390 Thin Provisioning: Not Supported
00:18:34.390 Per-NS Atomic Units: Yes
00:18:34.390 Atomic Boundary Size (Normal): 0
00:18:34.390 Atomic Boundary Size (PFail): 0
00:18:34.390 Atomic Boundary Offset: 0
00:18:34.390 Maximum Single Source Range Length: 65535
00:18:34.390 Maximum Copy Length: 65535
00:18:34.390 Maximum Source Range Count: 1
00:18:34.390 NGUID/EUI64 Never Reused: No
00:18:34.390 Namespace Write Protected: No
00:18:34.390 Number of LBA Formats: 1
00:18:34.390 Current LBA Format: LBA Format #00
00:18:34.390 LBA Format #00: Data Size: 512 Metadata Size: 0
00:18:34.390
00:18:34.388 [2024-10-01 15:31:33.519534] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:34.388 [2024-10-01 15:31:33.519547] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x92c8f0)
00:18:34.388 [2024-10-01 15:31:33.519557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:34.388 [2024-10-01 15:31:33.519590] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953a80, cid 7, qid 0
00:18:34.388 [2024-10-01 15:31:33.519670] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:34.388 [2024-10-01 15:31:33.519679] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:34.388 [2024-10-01 15:31:33.519683] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:34.388 [2024-10-01 15:31:33.519687] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953a80) on tqpair=0x92c8f0
00:18:34.388 [2024-10-01 15:31:33.519748] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:18:34.388 [2024-10-01 15:31:33.519763] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953000) on tqpair=0x92c8f0
00:18:34.388 [2024-10-01 15:31:33.519771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:34.388 [2024-10-01 15:31:33.519778] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953180) on tqpair=0x92c8f0
00:18:34.388 [2024-10-01 15:31:33.519783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:34.388 [2024-10-01 15:31:33.519788] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953300) on tqpair=0x92c8f0
00:18:34.388 [2024-10-01 15:31:33.519794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:34.388 [2024-10-01 15:31:33.519799] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953480) on tqpair=0x92c8f0
00:18:34.388 [2024-10-01 15:31:33.519804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:34.388 [2024-10-01 15:31:33.519815] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:34.388 [2024-10-01 15:31:33.519820] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:34.388 [2024-10-01 15:31:33.519824] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x92c8f0)
00:18:34.388 [2024-10-01 15:31:33.519833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:34.388 [2024-10-01 15:31:33.519860] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953480, cid 3, qid 0
00:18:34.388 [2024-10-01 15:31:33.519922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:34.388 [2024-10-01 15:31:33.519929] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:34.388 [2024-10-01 15:31:33.519934] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:34.388 [2024-10-01 15:31:33.519938] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953480) on tqpair=0x92c8f0
00:18:34.388 [2024-10-01 15:31:33.519947] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:34.388 [2024-10-01 15:31:33.519952] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:34.388 [2024-10-01 15:31:33.519956] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x92c8f0)
00:18:34.388 [2024-10-01 15:31:33.519964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:34.388 [2024-10-01 15:31:33.519988] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953480, cid 3, qid 0
00:18:34.388 [2024-10-01 15:31:33.520071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:34.388 [2024-10-01 15:31:33.520078] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:34.388 [2024-10-01 15:31:33.520082] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:34.388 [2024-10-01 15:31:33.520087] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953480) on tqpair=0x92c8f0
00:18:34.388 [2024-10-01 15:31:33.520092] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:18:34.388 [2024-10-01 15:31:33.520098] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:18:34.388 [2024-10-01 15:31:33.520109] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:34.388 [2024-10-01 15:31:33.520114] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:34.388 [2024-10-01 15:31:33.520118] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x92c8f0)
00:18:34.388 [2024-10-01 15:31:33.520126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:34.388 [2024-10-01 15:31:33.520146] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953480, cid 3, qid 0
[... the FABRIC PROPERTY GET request/response cycle above repeats on cid:3, tqpair(0x92c8f0) until 15:31:33.527490 while nvme_ctrlr_shutdown_poll_async waits on the shutdown status; duplicate iterations trimmed ...]
00:18:34.390 [2024-10-01 15:31:33.527557] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:34.390 [2024-10-01 15:31:33.527565] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:34.390 [2024-10-01 15:31:33.527570] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:34.390 [2024-10-01 15:31:33.527574] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x953480) on tqpair=0x92c8f0
00:18:34.390 [2024-10-01 15:31:33.527584] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
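For reference, the controller dump and the shutdown trace above come from SPDK's identify example app, which host/identify.sh points at the target subsystem over TCP. A rough equivalent by hand, assuming a default build tree; the traddr and trsvcid values below are placeholders, not taken from this log:

  # Sketch only: run SPDK's identify example against an NVMe-oF/TCP subsystem.
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'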
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:34.648 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 86969 ']'
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 86969
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 86969 ']'
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 86969
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:34.648 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86969
00:18:34.649 killing process with pid 86969
00:18:34.649 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:34.649 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:34.649 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86969'
00:18:34.649 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 86969
00:18:34.649 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 86969
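Stripped of xtrace noise, the killprocess helper traced above amounts to the following sketch (reconstructed from the trace, not the verbatim function):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                        # bail out if the target is already gone
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK nvmf target
      echo "killing process with pid $pid"
      kill "$pid"                                       # the sudo-wrapped case is handled separately upstream
      wait "$pid"                                       # reap the process and propagate its exit code
  }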
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:18:34.906 15:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:18:34.906 15:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:18:35.164 15:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:18:35.164 15:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:18:35.164 15:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:18:35.164 15:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns
00:18:35.164 15:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:35.164 15:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:35.164 15:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:35.164 15:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0
00:18:35.164
00:18:35.164 real 0m2.967s
00:18:35.164 user 0m7.677s
00:18:35.164 sys 0m0.723s
00:18:35.164 ************************************
00:18:35.164 END TEST nvmf_identify
00:18:35.164 ************************************
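The nvmf_veth_fini sequence above tears the virtual test network down in a fixed order: detach the veth ends from the bridge, bring the links down, delete the bridge, then remove the host-side and namespaced interfaces. Condensed into a loop with the same commands as the trace; the final netns removal stands in for _remove_spdk_ns and is an assumption:

  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster   # detach from the nvmf_br bridge first
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumption: what _remove_spdk_ns boils down to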
00:18:35.164 15:31:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:35.164 15:31:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:35.165 15:31:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:18:35.165 15:31:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:35.165 15:31:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:35.165 15:31:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:35.165 ************************************
00:18:35.165 START TEST nvmf_perf
00:18:35.165 ************************************
00:18:35.165 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:18:35.165 * Looking for test storage...
00:18:35.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:18:35.165 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:18:35.165 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version
00:18:35.165 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:18:35.421 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
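The trace above, which continues below, is scripts/common.sh comparing the detected lcov version (1.15) against 2 field by field to decide whether coverage flags apply. The helper's core logic, condensed into a standalone sketch (not the verbatim script):

  cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local op=$2 v a b
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }   # first differing field decides
          ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == *'='* ]]   # equal throughout: only <=, >=, == succeed
  }

The lt wrapper seen in the trace maps to cmp_versions "$1" '<' "$2", so lt 1.15 2 returns 0 here and the older-lcov flags are selected.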
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:18:35.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:35.422 --rc genhtml_branch_coverage=1
00:18:35.422 --rc genhtml_function_coverage=1
00:18:35.422 --rc genhtml_legend=1
00:18:35.422 --rc geninfo_all_blocks=1
00:18:35.422 --rc geninfo_unexecuted_blocks=1
00:18:35.422
00:18:35.422 '
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:18:35.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:35.422 --rc genhtml_branch_coverage=1
00:18:35.422 --rc genhtml_function_coverage=1
00:18:35.422 --rc genhtml_legend=1
00:18:35.422 --rc geninfo_all_blocks=1
00:18:35.422 --rc geninfo_unexecuted_blocks=1
00:18:35.422
00:18:35.422 '
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:18:35.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:35.422 --rc genhtml_branch_coverage=1
00:18:35.422 --rc genhtml_function_coverage=1
00:18:35.422 --rc genhtml_legend=1
00:18:35.422 --rc geninfo_all_blocks=1
00:18:35.422 --rc geninfo_unexecuted_blocks=1
00:18:35.422
00:18:35.422 '
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:18:35.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:35.422 --rc genhtml_branch_coverage=1
00:18:35.422 --rc genhtml_function_coverage=1
00:18:35.422 --rc genhtml_legend=1
00:18:35.422 --rc geninfo_all_blocks=1
00:18:35.422 --rc geninfo_unexecuted_blocks=1
00:18:35.422
00:18:35.422 '
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- #
NVMF_SECOND_PORT=4421 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:35.422 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:35.422 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:35.423 Cannot find device "nvmf_init_br" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:35.423 Cannot find device "nvmf_init_br2" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:35.423 Cannot find device "nvmf_tgt_br" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.423 Cannot find device "nvmf_tgt_br2" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:35.423 Cannot find device "nvmf_init_br" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:35.423 Cannot find device "nvmf_init_br2" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:35.423 Cannot find device "nvmf_tgt_br" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:35.423 Cannot find device "nvmf_tgt_br2" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:35.423 Cannot find device "nvmf_br" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:35.423 Cannot find device "nvmf_init_if" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:35.423 Cannot find device "nvmf_init_if2" 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:35.423 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:35.681 15:31:34 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:35.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:35.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:18:35.681 00:18:35.681 --- 10.0.0.3 ping statistics --- 00:18:35.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.681 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:35.681 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:35.681 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:18:35.681 00:18:35.681 --- 10.0.0.4 ping statistics --- 00:18:35.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.681 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:35.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:35.681 00:18:35.681 --- 10.0.0.1 ping statistics --- 00:18:35.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.681 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:35.681 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:35.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:18:35.681 00:18:35.681 --- 10.0.0.2 ping statistics --- 00:18:35.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.682 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=87249 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 87249 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 87249 ']' 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:35.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
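Note on the waitforlisten step traced above: nvmfappstart launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace (nvmf/common.sh@504) and then blocks until the app answers on its UNIX-domain RPC socket. A minimal sketch of that wait loop, assuming rpc.py's generic rpc_get_methods call as the liveness probe (the real helper in autotest_common.sh is more thorough, and the 0.5s retry interval here is an assumption):

    # launch the target inside the test namespace, as traced at nvmf/common.sh@504
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll /var/tmp/spdk.sock until the app starts answering RPCs
    for ((i = 0; i < 100; i++)); do                    # max_retries=100, as traced at @836
        kill -0 "$nvmfpid" 2> /dev/null || exit 1      # bail out if the target died early
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 \
                rpc_get_methods &> /dev/null; then
            break                                      # socket is up and serving RPCs
        fi
        sleep 0.5                                      # assumed retry interval
    done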
00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:35.682 15:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:35.940 [2024-10-01 15:31:34.885040] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:18:35.940 [2024-10-01 15:31:34.885119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.940 [2024-10-01 15:31:35.029648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.197 [2024-10-01 15:31:35.119491] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.197 [2024-10-01 15:31:35.119559] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.197 [2024-10-01 15:31:35.119576] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.197 [2024-10-01 15:31:35.119588] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.197 [2024-10-01 15:31:35.119600] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.197 [2024-10-01 15:31:35.119692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.197 [2024-10-01 15:31:35.119771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.197 [2024-10-01 15:31:35.119914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.197 [2024-10-01 15:31:35.119931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.197 15:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:36.197 15:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:18:36.197 15:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:36.197 15:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:36.197 15:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:36.197 15:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.197 15:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:36.197 15:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:36.761 15:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:36.762 15:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:37.326 15:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:37.326 15:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:37.583 15:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:37.583 15:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:18:37.583 15:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
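With the bdev list assembled (' Malloc0 Nvme0n1'), perf.sh wires the target up over TCP in four RPC steps, traced below at host/perf.sh@42 through @49. Condensed into a standalone sketch using the same subsystem name and the 10.0.0.3 listener address from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o                         # TCP transport, default options
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                                 # allow any host, fixed serial
    for bdev in Malloc0 Nvme0n1; do                              # one namespace per bdev
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420                               # target-side veth address
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420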
00:18:37.583 15:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:37.583 15:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:37.841 [2024-10-01 15:31:36.808602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.841 15:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.098 15:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:38.098 15:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.356 15:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:38.356 15:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:38.920 15:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:39.178 [2024-10-01 15:31:38.238603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:39.178 15:31:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:39.743 15:31:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:39.743 15:31:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:39.743 15:31:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:39.743 15:31:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:40.677 Initializing NVMe Controllers 00:18:40.677 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:40.677 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:40.677 Initialization complete. Launching workers. 00:18:40.677 ======================================================== 00:18:40.677 Latency(us) 00:18:40.677 Device Information : IOPS MiB/s Average min max 00:18:40.677 PCIE (0000:00:10.0) NSID 1 from core 0: 25504.00 99.62 1254.34 304.10 5796.23 00:18:40.677 ======================================================== 00:18:40.677 Total : 25504.00 99.62 1254.34 304.10 5796.23 00:18:40.677 00:18:40.677 15:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:42.086 Initializing NVMe Controllers 00:18:42.086 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.086 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:42.086 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:42.086 Initialization complete. Launching workers. 
00:18:42.086 ======================================================== 00:18:42.086 Latency(us) 00:18:42.086 Device Information : IOPS MiB/s Average min max 00:18:42.086 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2397.60 9.37 416.62 124.31 6208.61 00:18:42.086 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.88 0.48 8136.06 5971.70 12011.64 00:18:42.086 ======================================================== 00:18:42.086 Total : 2521.47 9.85 795.86 124.31 12011.64 00:18:42.086 00:18:42.086 15:31:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:43.459 Initializing NVMe Controllers 00:18:43.459 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:43.459 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:43.459 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:43.459 Initialization complete. Launching workers. 00:18:43.459 ======================================================== 00:18:43.459 Latency(us) 00:18:43.460 Device Information : IOPS MiB/s Average min max 00:18:43.460 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5788.88 22.61 5531.32 888.10 15169.32 00:18:43.460 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2126.45 8.31 15173.76 5769.61 42073.85 00:18:43.460 ======================================================== 00:18:43.460 Total : 7915.33 30.92 8121.75 888.10 42073.85 00:18:43.460 00:18:43.460 15:31:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:43.460 15:31:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:45.992 Initializing NVMe Controllers 00:18:45.992 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:45.992 Controller IO queue size 128, less than required. 00:18:45.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:45.992 Controller IO queue size 128, less than required. 00:18:45.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:45.992 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:45.992 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:45.992 Initialization complete. Launching workers. 
00:18:45.992 ======================================================== 00:18:45.992 Latency(us) 00:18:45.992 Device Information : IOPS MiB/s Average min max 00:18:45.992 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 949.71 237.43 143626.96 61279.87 328749.53 00:18:45.992 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 492.55 123.14 275922.88 91877.89 474318.33 00:18:45.992 ======================================================== 00:18:45.992 Total : 1442.26 360.56 188807.87 61279.87 474318.33 00:18:45.992 00:18:45.992 15:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:46.249 Initializing NVMe Controllers 00:18:46.249 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:46.249 Controller IO queue size 128, less than required. 00:18:46.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:46.249 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:46.249 Controller IO queue size 128, less than required. 00:18:46.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:46.249 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:46.250 WARNING: Some requested NVMe devices were skipped 00:18:46.250 No valid NVMe controllers or AIO or URING devices found 00:18:46.250 15:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:48.779 Initializing NVMe Controllers 00:18:48.779 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:48.779 Controller IO queue size 128, less than required. 00:18:48.779 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:48.779 Controller IO queue size 128, less than required. 00:18:48.779 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:48.779 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:48.779 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:48.779 Initialization complete. Launching workers. 
00:18:48.779 00:18:48.779 ==================== 00:18:48.779 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:48.779 TCP transport: 00:18:48.779 polls: 11873 00:18:48.779 idle_polls: 5000 00:18:48.779 sock_completions: 6873 00:18:48.779 nvme_completions: 3595 00:18:48.779 submitted_requests: 5296 00:18:48.779 queued_requests: 1 00:18:48.779 00:18:48.779 ==================== 00:18:48.779 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:48.779 TCP transport: 00:18:48.779 polls: 11741 00:18:48.779 idle_polls: 8548 00:18:48.779 sock_completions: 3193 00:18:48.779 nvme_completions: 5267 00:18:48.779 submitted_requests: 7964 00:18:48.779 queued_requests: 1 00:18:48.779 ======================================================== 00:18:48.779 Latency(us) 00:18:48.779 Device Information : IOPS MiB/s Average min max 00:18:48.779 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 896.98 224.24 146906.54 76396.23 396725.80 00:18:48.779 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1314.27 328.57 97528.09 37143.69 194419.25 00:18:48.779 ======================================================== 00:18:48.779 Total : 2211.25 552.81 117558.13 37143.69 396725.80 00:18:48.779 00:18:48.779 15:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:49.037 15:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.295 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:49.296 rmmod nvme_tcp 00:18:49.296 rmmod nvme_fabrics 00:18:49.296 rmmod nvme_keyring 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 87249 ']' 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 87249 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 87249 ']' 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 87249 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87249 00:18:49.296 killing process with pid 87249 00:18:49.296 15:31:48 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87249' 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 87249 00:18:49.296 15:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 87249 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:50.228 ************************************ 00:18:50.228 END TEST nvmf_perf 00:18:50.228 ************************************ 
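End of nvmf_perf: nvmftestfini kills the target (pid 87249), unloads nvme-tcp, nvme-fabrics and nvme-keyring, and tears the network sandbox down. The iptr helper traced at nvmf/common.sh@297/@787 removes exactly the rules the setup tagged, and _remove_spdk_ns is assumed here to boil down to deleting the namespace; a condensed sketch of the cleanup:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the comment-tagged rules
    ip link delete nvmf_br type bridge                     # bridge and veth links, as traced
    ip netns delete nvmf_tgt_ns_spdk                       # assumed body of _remove_spdk_ns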
00:18:50.228 00:18:50.228 real 0m15.077s 00:18:50.228 user 0m54.981s 00:18:50.228 sys 0m3.534s 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.228 ************************************ 00:18:50.228 START TEST nvmf_fio_host 00:18:50.228 ************************************ 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:50.228 * Looking for test storage... 00:18:50.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:50.228 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:50.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.487 --rc genhtml_branch_coverage=1 00:18:50.487 --rc genhtml_function_coverage=1 00:18:50.487 --rc genhtml_legend=1 00:18:50.487 --rc geninfo_all_blocks=1 00:18:50.487 --rc geninfo_unexecuted_blocks=1 00:18:50.487 00:18:50.487 ' 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:50.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.487 --rc genhtml_branch_coverage=1 00:18:50.487 --rc genhtml_function_coverage=1 00:18:50.487 --rc genhtml_legend=1 00:18:50.487 --rc geninfo_all_blocks=1 00:18:50.487 --rc geninfo_unexecuted_blocks=1 00:18:50.487 00:18:50.487 ' 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:50.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.487 --rc genhtml_branch_coverage=1 00:18:50.487 --rc genhtml_function_coverage=1 00:18:50.487 --rc genhtml_legend=1 00:18:50.487 --rc geninfo_all_blocks=1 00:18:50.487 --rc geninfo_unexecuted_blocks=1 00:18:50.487 00:18:50.487 ' 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:50.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.487 --rc genhtml_branch_coverage=1 00:18:50.487 --rc genhtml_function_coverage=1 00:18:50.487 --rc genhtml_legend=1 00:18:50.487 --rc geninfo_all_blocks=1 00:18:50.487 --rc geninfo_unexecuted_blocks=1 00:18:50.487 00:18:50.487 ' 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.487 15:31:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.487 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.488 15:31:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:50.488 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
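nvmf_fio_host now re-runs nvmftestinit from scratch, so the same veth/namespace build-up repeats below; the "Cannot find device" and "Cannot open network namespace" messages are the expected no-ops of tearing down a sandbox that is not there yet. The topology the following traces construct, condensed into a sketch (only one of the two interface pairs per side is shown; the real init also creates nvmf_init_if2 and nvmf_tgt_if2 on 10.0.0.2/10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # target end enters the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joins both sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                              # tag makes teardown greppable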
00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:50.488 Cannot find device "nvmf_init_br" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:50.488 Cannot find device "nvmf_init_br2" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:50.488 Cannot find device "nvmf_tgt_br" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:50.488 Cannot find device "nvmf_tgt_br2" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:50.488 Cannot find device "nvmf_init_br" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:50.488 Cannot find device "nvmf_init_br2" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:50.488 Cannot find device "nvmf_tgt_br" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:50.488 Cannot find device "nvmf_tgt_br2" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:50.488 Cannot find device "nvmf_br" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:50.488 Cannot find device "nvmf_init_if" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:50.488 Cannot find device "nvmf_init_if2" 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:50.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:50.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:50.488 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:50.746 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:50.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:50.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:18:50.747 00:18:50.747 --- 10.0.0.3 ping statistics --- 00:18:50.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.747 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:50.747 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:50.747 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:18:50.747 00:18:50.747 --- 10.0.0.4 ping statistics --- 00:18:50.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.747 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:50.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:50.747 00:18:50.747 --- 10.0.0.1 ping statistics --- 00:18:50.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.747 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:50.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:18:50.747 00:18:50.747 --- 10.0.0.2 ping statistics --- 00:18:50.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.747 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:50.747 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87778 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87778 00:18:51.004 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 87778 ']' 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.004 15:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.004 [2024-10-01 15:31:49.981756] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:18:51.004 [2024-10-01 15:31:49.981860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.004 [2024-10-01 15:31:50.116947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.262 [2024-10-01 15:31:50.183118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.262 [2024-10-01 15:31:50.183310] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.262 [2024-10-01 15:31:50.183557] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.262 [2024-10-01 15:31:50.183750] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.262 [2024-10-01 15:31:50.183845] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
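The nvmf_veth_init trace above reduces to a short sequence of ip(8) and iptables(8) calls. The sketch below is an annotation, not log output: a condensed reproduction of that setup using the interface names and addresses shown in the trace, with the second initiator/target pair and the pre-cleanup (the expected "Cannot find device" lines) omitted.

  # Build the test topology: one veth pair per side, bridged in the root
  # namespace, with the target end moved into the nvmf_tgt_ns_spdk namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br      # bridge the two peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  # The SPDK_NVMF comment tag is what lets the later cleanup (iptr) strip these
  # rules with: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                  # root namespace -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> root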
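With the namespace reachable, host/fio.sh starts the target inside it and provisions it over JSON-RPC; the trace that follows shows each call. A condensed sketch of that sequence, with paths and arguments copied from the trace (the -o flag to nvmf_create_transport is passed through from NVMF_TRANSPORT_OPTS as-is):

  # Launch nvmf_tgt inside the namespace; waitforlisten blocks on /var/tmp/spdk.sock.
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB IO unit
  $rpc bdev_malloc_create 64 512 -b Malloc1          # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  # fio then drives the subsystem through the SPDK ioengine via LD_PRELOAD:
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096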
00:18:51.262 [2024-10-01 15:31:50.184142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.262 [2024-10-01 15:31:50.184201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.262 [2024-10-01 15:31:50.184295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.262 [2024-10-01 15:31:50.184297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.194 15:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:52.194 15:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:18:52.194 15:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:52.194 [2024-10-01 15:31:51.332983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.452 15:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:52.452 15:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:52.452 15:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.452 15:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:52.713 Malloc1 00:18:52.713 15:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:52.971 15:31:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:53.229 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:53.487 [2024-10-01 15:31:52.561377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:53.487 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # shift
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:18:53.745 15:31:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:18:54.004 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:18:54.004 fio-3.35
00:18:54.004 Starting 1 thread
00:18:56.532
00:18:56.532 test: (groupid=0, jobs=1): err= 0: pid=87909: Tue Oct 1 15:31:55 2024
00:18:56.532 read: IOPS=4612, BW=18.0MiB/s (18.9MB/s)(36.2MiB/2008msec)
00:18:56.532 slat (usec): min=2, max=1240, avg=10.05, stdev=14.57
00:18:56.532 clat (usec): min=6243, max=24038, avg=16422.66, stdev=2380.26
00:18:56.532 lat (usec): min=6245, max=24048, avg=16432.71, stdev=2380.15
00:18:56.532 clat percentiles (usec):
00:18:56.532 | 1.00th=[11338], 5.00th=[12911], 10.00th=[13435], 20.00th=[14091],
00:18:56.532 | 30.00th=[14877], 40.00th=[15664], 50.00th=[16450], 60.00th=[17171],
00:18:56.532 | 70.00th=[17957], 80.00th=[18744], 90.00th=[19530], 95.00th=[20317],
00:18:56.532 | 99.00th=[21103], 99.50th=[21365], 99.90th=[22152], 99.95th=[23725],
00:18:56.532 | 99.99th=[23987]
00:18:56.532 bw ( KiB/s): min=17764, max=18664, per=99.58%, avg=18373.00, stdev=418.78, samples=4
00:18:56.532 iops : min= 4441, max= 4666, avg=4593.25, stdev=104.69, samples=4
00:18:56.532 write: IOPS=4614, BW=18.0MiB/s (18.9MB/s)(36.2MiB/2008msec); 0 zone resets
00:18:56.532 slat (usec): min=2, max=469, avg=10.10, stdev= 5.64
00:18:56.532 clat (usec): min=2728, max=19303, avg=11204.05, stdev=1398.94
00:18:56.532 lat (usec): min=2730, max=19312, avg=11214.15, stdev=1398.89
00:18:56.532 clat percentiles (usec):
00:18:56.532 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10028],
00:18:56.532 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600],
00:18:56.532 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12911], 95.00th=[13304],
00:18:56.532 | 99.00th=[13960], 99.50th=[14353], 99.90th=[16909], 99.95th=[18220],
00:18:56.532 | 99.99th=[19268]
00:18:56.532 bw ( KiB/s): min=18080, max=18834, per=99.74%, avg=18410.50, stdev=328.46, samples=4
00:18:56.532 iops : min= 4520, max= 4708, avg=4602.50, stdev=81.90, samples=4
00:18:56.532 lat (msec) : 4=0.04%, 10=10.25%, 20=86.30%, 50=3.41%
00:18:56.532 cpu : usr=79.47%, sys=12.76%, ctx=6, majf=0, minf=6
00:18:56.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:18:56.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:56.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:56.532 issued rwts: total=9262,9266,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:56.532 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:56.532
00:18:56.532 Run status group 0 (all jobs):
00:18:56.532 READ: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=36.2MiB (37.9MB), run=2008-2008msec
00:18:56.532 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=36.2MiB (38.0MB), run=2008-2008msec
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:18:56.532 15:31:55
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:18:56.532 15:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
00:18:56.532 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:18:56.532 fio-3.35
00:18:56.532 Starting 1 thread
00:18:59.065
00:18:59.065 test: (groupid=0, jobs=1): err= 0: pid=87952: Tue Oct 1 15:31:57 2024
00:18:59.065 read: IOPS=3753, BW=58.6MiB/s (61.5MB/s)(118MiB/2012msec)
00:18:59.065 slat (usec): min=3, max=123, avg= 4.70, stdev= 3.04
00:18:59.065 clat (usec): min=2586, max=46735, avg=18852.07, stdev=6860.68
00:18:59.065 lat (usec): min=2589, max=46739, avg=18856.78, stdev=6861.10
00:18:59.065 clat percentiles (usec):
00:18:59.065 | 1.00th=[ 6194], 5.00th=[ 8848], 10.00th=[10421], 20.00th=[12125],
00:18:59.065 | 30.00th=[13960], 40.00th=[16909], 50.00th=[19268], 60.00th=[20579],
00:18:59.065 | 70.00th=[22414], 80.00th=[24249], 90.00th=[26346], 95.00th=[32113],
00:18:59.065 | 99.00th=[38536], 99.50th=[40109], 99.90th=[40633], 99.95th=[41681],
00:18:59.065 | 99.99th=[46924]
00:18:59.065 bw ( KiB/s): min=27040, max=34272, per=51.14%, avg=30712.00, stdev=3362.02, samples=4
00:18:59.065 iops : min= 1690, max= 2142, avg=1919.50, stdev=210.13, samples=4
00:18:59.065 write: IOPS=2138, BW=33.4MiB/s (35.0MB/s)(63.4MiB/1898msec); 0 zone resets
00:18:59.065 slat (usec): min=37, max=435, avg=41.31, stdev= 8.51
00:18:59.065 clat (usec): min=9150, max=53049, avg=27618.65, stdev=8102.23
00:18:59.065 lat (usec): min=9195, max=53087, avg=27659.95, stdev=8102.34
00:18:59.065 clat percentiles (usec):
00:18:59.065 | 1.00th=[10552], 5.00th=[12256], 10.00th=[14353], 20.00th=[18744],
00:18:59.065 | 30.00th=[25297], 40.00th=[27919], 50.00th=[29754], 60.00th=[31065],
00:18:59.065 | 70.00th=[32900], 80.00th=[34341], 90.00th=[36439], 95.00th=[38011],
00:18:59.065 | 99.00th=[41681], 99.50th=[43254], 99.90th=[48497], 99.95th=[50070],
00:18:59.065 | 99.99th=[53216]
00:18:59.065 bw ( KiB/s): min=28640, max=36960, per=93.71%, avg=32064.00, stdev=4104.82, samples=4
00:18:59.065 iops : min= 1790, max= 2310, avg=2004.00, stdev=256.55, samples=4
00:18:59.065 lat (msec) : 4=0.14%, 10=5.62%, 20=38.08%, 50=56.15%, 100=0.02%
00:18:59.065 cpu : usr=80.86%, sys=15.81%, ctx=19, majf=0, minf=19
00:18:59.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:18:59.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:59.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:59.065 issued rwts: total=7552,4059,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:59.065 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:59.065
00:18:59.065 Run status group 0 (all jobs):
00:18:59.065 READ: bw=58.6MiB/s (61.5MB/s), 58.6MiB/s-58.6MiB/s (61.5MB/s-61.5MB/s), io=118MiB (124MB), run=2012-2012msec
00:18:59.065 WRITE: bw=33.4MiB/s (35.0MB/s), 33.4MiB/s-33.4MiB/s (35.0MB/s-35.0MB/s), io=63.4MiB (66.5MB), run=1898-1898msec
00:18:59.065 15:31:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:59.065 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:18:59.065 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:18:59.065 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:18:59.065 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:18:59.065 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup
00:18:59.065 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:59.323 rmmod nvme_tcp
00:18:59.323 rmmod nvme_fabrics
00:18:59.323 rmmod nvme_keyring
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 87778 ']'
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 87778
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 87778 ']'
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 87778
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87778
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:59.323 killing process with pid 87778
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87778'
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 87778
00:18:59.323 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 87778
00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr
00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save
00:18:59.581 15:31:58
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:59.581 00:18:59.581 real 0m9.419s 00:18:59.581 user 0m38.263s 00:18:59.581 sys 0m2.073s 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.581 15:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.581 ************************************ 00:18:59.581 END TEST nvmf_fio_host 00:18:59.581 ************************************ 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.839 ************************************ 00:18:59.839 START TEST nvmf_failover 00:18:59.839 ************************************ 00:18:59.839 15:31:58 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:59.839 * Looking for test storage... 00:18:59.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:59.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.839 --rc genhtml_branch_coverage=1 00:18:59.839 --rc genhtml_function_coverage=1 00:18:59.839 --rc genhtml_legend=1 00:18:59.839 --rc geninfo_all_blocks=1 00:18:59.839 --rc geninfo_unexecuted_blocks=1 00:18:59.839 00:18:59.839 ' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:59.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.839 --rc genhtml_branch_coverage=1 00:18:59.839 --rc genhtml_function_coverage=1 00:18:59.839 --rc genhtml_legend=1 00:18:59.839 --rc geninfo_all_blocks=1 00:18:59.839 --rc geninfo_unexecuted_blocks=1 00:18:59.839 00:18:59.839 ' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:59.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.839 --rc genhtml_branch_coverage=1 00:18:59.839 --rc genhtml_function_coverage=1 00:18:59.839 --rc genhtml_legend=1 00:18:59.839 --rc geninfo_all_blocks=1 00:18:59.839 --rc geninfo_unexecuted_blocks=1 00:18:59.839 00:18:59.839 ' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:59.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.839 --rc genhtml_branch_coverage=1 00:18:59.839 --rc genhtml_function_coverage=1 00:18:59.839 --rc genhtml_legend=1 00:18:59.839 --rc geninfo_all_blocks=1 00:18:59.839 --rc geninfo_unexecuted_blocks=1 00:18:59.839 00:18:59.839 ' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.839 
15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.839 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
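The lt/cmp_versions calls traced earlier in this test's preamble gate the lcov option names on the installed lcov version: both version strings are split on the separators in IFS=.-: and compared field by field. A minimal bash equivalent of that comparison (a simplified sketch; the real helper in scripts/common.sh routes through cmp_versions and supports other operators):

  lt() {
    # Split both versions on the separators the trace shows (IFS=.-:) and
    # compare numerically, treating missing fields as 0.
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      local x=${a[i]:-0} y=${b[i]:-0}
      ((x < y)) && return 0    # first version is lower
      ((x > y)) && return 1
    done
    return 1                   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option names"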
00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:59.839 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:59.840 15:31:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:00.097 Cannot find device "nvmf_init_br" 00:19:00.097 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:00.097 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:00.097 Cannot find device "nvmf_init_br2" 00:19:00.097 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:00.098 Cannot find device "nvmf_tgt_br" 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:00.098 Cannot find device "nvmf_tgt_br2" 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:00.098 Cannot find device "nvmf_init_br" 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:00.098 Cannot find device "nvmf_init_br2" 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:00.098 Cannot find device "nvmf_tgt_br" 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:00.098 Cannot find device "nvmf_tgt_br2" 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:00.098 Cannot find device "nvmf_br" 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:00.098 Cannot find device "nvmf_init_if" 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:00.098 Cannot find device "nvmf_init_if2" 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:00.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:00.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:00.098 
15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:00.098 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:00.356 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:00.356 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:00.356 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.356 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:00.356 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:00.356 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:00.356 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.356 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:00.356 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:00.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:19:00.356 00:19:00.356 --- 10.0.0.3 ping statistics --- 00:19:00.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.356 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:19:00.356 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:00.356 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:00.357 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:19:00.357 00:19:00.357 --- 10.0.0.4 ping statistics --- 00:19:00.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.357 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:00.357 00:19:00.357 --- 10.0.0.1 ping statistics --- 00:19:00.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.357 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:00.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:00.357 00:19:00.357 --- 10.0.0.2 ping statistics --- 00:19:00.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.357 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=88232 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 88232 00:19:00.357 15:31:59 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 88232 ']' 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.357 15:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:00.357 [2024-10-01 15:31:59.439838] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:19:00.357 [2024-10-01 15:31:59.439966] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.615 [2024-10-01 15:31:59.593249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:00.615 [2024-10-01 15:31:59.680331] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.615 [2024-10-01 15:31:59.680400] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.615 [2024-10-01 15:31:59.680417] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.615 [2024-10-01 15:31:59.680449] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.615 [2024-10-01 15:31:59.680460] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
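Note: everything the target does from here on happens inside the nvmf_tgt_ns_spdk namespace built above (target side 10.0.0.3/10.0.0.4, initiator side 10.0.0.1/10.0.0.2, joined through the nvmf_br bridge). Condensed to a sketch, the launch traced above is:
  # nvmf_tgt launched inside the network namespace (paths per this CI checkout)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
  # -m 0xE    core mask 0b1110: reactors on cores 1-3, matching the three
  #           "Reactor started on core" notices that follow
  # -e 0xFFFF enable all tracepoint groups (hence the spdk_trace hints above)
  # -i 0      shm id, so 'spdk_trace -s nvmf -i 0' can attach to /dev/shm/nvmf_trace.0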
00:19:00.615 [2024-10-01 15:31:59.680818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.615 [2024-10-01 15:31:59.681050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.615 [2024-10-01 15:31:59.681064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.548 15:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.548 15:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:01.548 15:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:01.548 15:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.548 15:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:01.548 15:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.548 15:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:01.806 [2024-10-01 15:32:00.740025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.806 15:32:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:02.063 Malloc0 00:19:02.063 15:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:02.627 15:32:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:03.190 15:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:03.448 [2024-10-01 15:32:02.521477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:03.448 15:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:04.012 [2024-10-01 15:32:02.914014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:04.012 15:32:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:04.269 [2024-10-01 15:32:03.274293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:04.269 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88355 00:19:04.269 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:04.269 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:04.269 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88355 /var/tmp/bdevperf.sock 
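Note: with the xtrace noise stripped away, the provisioning that just ran reduces to the sketch below (the RPC shorthand variable is ours; without -s, rpc.py talks to the target over the default /var/tmp/spdk.sock):
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport for the target
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
  for port in 4420 4421 4422; do                      # three listeners = three candidate paths
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done
  # Host side: bdevperf idles (-z) on its own RPC socket until perform_tests is sent;
  # -q 128 queue depth, -o 4096-byte I/Os, -w verify with read-back checking, -t 15 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &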
00:19:04.269 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 88355 ']'
00:19:04.269 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:04.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:04.269 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:04.269 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:04.269 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:04.269 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:19:04.527 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:04.527 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:19:04.527 15:32:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:19:05.092 NVMe0n1
00:19:05.092 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:19:05.351
00:19:05.351 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88389
00:19:05.351 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:05.351 15:32:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:19:06.286 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:06.853 [2024-10-01 15:32:05.724826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0a90 is same with the state(6) to be set
[... roughly 30 further identical tcp.c:1773 lines for tqpair=0x22d0a90 omitted ...]
00:19:06.853 15:32:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:19:10.136 15:32:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:19:10.136
00:19:10.136 15:32:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:19:10.396 [2024-10-01 15:32:09.467232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d16f0 is same with the state(6) to be set
[... roughly 70 further identical tcp.c:1773 lines for tqpair=0x22d16f0 omitted ...]
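Note: both bdev_nvme_attach_controller calls above use the same -b NVMe0 with -x failover, so each later portal is registered as an alternate path for the existing controller rather than a new bdev; every nvmf_subsystem_remove_listener then forces I/O over to the next path. The repeated tcp.c:1773 recv-state errors after each removal are logged while the dropped connection's queue pairs are torn down and, judging by the clean completion later in this run, appear to be benign here. A sketch of the host-side path-list construction (RPC shorthand ours):
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x failover   # first path; creates bdev NVMe0n1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x failover   # same -b name: adds 4421 as an alternate path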
00:19:10.397 15:32:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:19:13.699 15:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:13.699 [2024-10-01 15:32:12.800262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:19:13.699 15:32:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:19:15.076 15:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:19:15.076 [2024-10-01 15:32:14.124434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d24a0 is same with the state(6) to be set
[... roughly 130 further identical tcp.c:1773 lines for tqpair=0x22d24a0 omitted ...]
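Note: across the three listener removals this run emitted a few hundred of these near-identical lines; when eyeballing such a log, counting them per queue pair is quicker than scrolling (failover.log is a hypothetical capture of this console output):
  # Tally the repeated recv-state messages by tqpair address
  grep -o 'tqpair=0x[0-9a-f]*' failover.log | sort | uniq -c | sort -rn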
00:19:15.078 15:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 88389
00:19:21.645 {
00:19:21.645   "results": [
00:19:21.645     {
00:19:21.645       "job": "NVMe0n1",
00:19:21.645       "core_mask": "0x1",
00:19:21.645       "workload": "verify",
00:19:21.645       "status": "finished",
00:19:21.645       "verify_range": {
00:19:21.645         "start": 0,
00:19:21.645         "length": 16384
00:19:21.645       },
00:19:21.645       "queue_depth": 128,
00:19:21.645       "io_size": 4096,
00:19:21.645       "runtime": 15.012247,
00:19:21.645       "iops": 8745.12656233274,
00:19:21.645       "mibps": 34.160650634112265,
00:19:21.645       "io_failed": 3277,
00:19:21.645       "io_timeout": 0,
00:19:21.645       "avg_latency_us": 14248.197716615174,
00:19:21.645       "min_latency_us": 625.5709090909091,
00:19:21.645       "max_latency_us": 21567.30181818182
00:19:21.645     }
00:19:21.645   ],
00:19:21.645   "core_count": 1
00:19:21.645 }
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 88355
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 88355 ']'
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 88355
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88355
00:19:21.645 killing process with pid 88355
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88355'
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 88355
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 88355
00:19:21.645 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
[2024-10-01 15:32:03.363089] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization...
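Note: in the bdevperf summary above, mibps follows directly from iops at the configured 4096-byte I/O size (8745.127 * 4096 / 2^20 = 34.16 MiB/s), and io_failed=3277 presumably tallies the I/Os cut off across the three path drops. To pull the headline numbers out of a saved copy (results.json is a hypothetical capture of that JSON block):
  # Extract the key metrics from the perform_tests summary
  jq '.results[0] | {iops, mibps, io_failed, avg_latency_us}' results.json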
00:19:21.645 [2024-10-01 15:32:03.363238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88355 ]
00:19:21.645 [2024-10-01 15:32:03.501045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:21.645 [2024-10-01 15:32:03.568376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:21.645 Running I/O for 15 seconds...
00:19:21.645 8757.00 IOPS, 34.21 MiB/s
[... dozens of repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs (2024-10-01 15:32:05.726110-15:32:05.729294) elided: in-flight WRITE (lba 83896-84512) and READ (lba 83504-83888) commands on sqid:1 each completed as ABORTED - SQ DELETION (00/08) during queue pair deletion ...]
00:19:21.647 [2024-10-01 15:32:05.730237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:21.647 [2024-10-01 15:32:05.730252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:21.647 [2024-10-01 15:32:05.730264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84520 len:8 PRP1 0x0 PRP2 0x0
00:19:21.647 [2024-10-01 15:32:05.730278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.647 [2024-10-01 15:32:05.730327] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fb6ed0 was disconnected and freed. reset controller.
00:19:21.647 [2024-10-01 15:32:05.730346] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:19:21.647 [2024-10-01 15:32:05.730401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:21.647 [2024-10-01 15:32:05.730437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.647 [2024-10-01 15:32:05.730456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:21.647 [2024-10-01 15:32:05.730470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.647 [2024-10-01 15:32:05.730485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:21.647 [2024-10-01 15:32:05.730498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.647 [2024-10-01 15:32:05.730513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:21.647 [2024-10-01 15:32:05.730527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.647 [2024-10-01 15:32:05.730543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:21.647 [2024-10-01 15:32:05.730595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f442c0 (9): Bad file descriptor
00:19:21.647 [2024-10-01 15:32:05.734524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:21.647 [2024-10-01 15:32:05.767157] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:21.647 8660.50 IOPS, 33.83 MiB/s
00:19:21.647 8723.33 IOPS, 34.08 MiB/s
00:19:21.647 8746.50 IOPS, 34.17 MiB/s
[... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs (2024-10-01 15:32:09.468224 onward) elided: in-flight READ commands (lba 89632-90192) on sqid:1 each completed as ABORTED - SQ DELETION (00/08) during queue pair deletion ...]
00:19:21.649 [2024-10-01 15:32:09.470546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.649 [2024-10-01 15:32:09.470576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.649 [2024-10-01 15:32:09.470605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.649 [2024-10-01 15:32:09.470634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470854] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.470975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.470991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471162] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 
15:32:09.471798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.649 [2024-10-01 15:32:09.471971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.649 [2024-10-01 15:32:09.471987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.650 [2024-10-01 15:32:09.472001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.650 [2024-10-01 15:32:09.472031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.650 [2024-10-01 15:32:09.472061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.650 [2024-10-01 15:32:09.472091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.650 [2024-10-01 15:32:09.472121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.650 [2024-10-01 15:32:09.472150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.650 [2024-10-01 15:32:09.472180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.650 [2024-10-01 15:32:09.472210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.650 [2024-10-01 15:32:09.472240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.650 [2024-10-01 15:32:09.472270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9000 is same with the state(6) to be set 00:19:21.650 [2024-10-01 15:32:09.472303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.650 [2024-10-01 15:32:09.472320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.650 [2024-10-01 15:32:09.472335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90264 len:8 PRP1 0x0 PRP2 0x0 00:19:21.650 [2024-10-01 15:32:09.472349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.650 [2024-10-01 15:32:09.472409] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fb9000 was disconnected and freed. reset controller. 
00:19:21.650 [2024-10-01 15:32:09.472442] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:19:21.650 [2024-10-01 15:32:09.472498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:21.650 [2024-10-01 15:32:09.472521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.650 [2024-10-01 15:32:09.472537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:21.650 [2024-10-01 15:32:09.472551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.650 [2024-10-01 15:32:09.472565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:21.650 [2024-10-01 15:32:09.472579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.650 [2024-10-01 15:32:09.472593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:21.650 [2024-10-01 15:32:09.472607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.650 [2024-10-01 15:32:09.472621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:21.650 [2024-10-01 15:32:09.472669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f442c0 (9): Bad file descriptor
00:19:21.650 [2024-10-01 15:32:09.476622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:21.650 [2024-10-01 15:32:09.516297] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
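Context for the failover above: bdev_nvme can fail over only because a second TCP path to the same subsystem (nqn.2016-06.io.spdk:cnode1) was registered beforehand. A minimal sketch of such a two-path attach via SPDK's scripts/rpc.py, assuming an illustrative bdev name Nvme0 (the addresses, ports, and NQN mirror this log; the exact setup commands of this run are not shown here):

    # attach primary path; creates namespace bdev Nvme0n1 backed by 10.0.0.3:4421
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # attach alternate path in failover mode; registers 10.0.0.3:4422 as a standby trid instead of creating a new bdev
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

Once the active qpair is freed (the bdev_nvme_disconnected_qpair_cb notice above), bdev_nvme_failover_trid switches the controller to the standby trid and resets it, which is the 4421 -> 4422 transition ending in "Resetting controller successful." below.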
00:19:21.650 8652.00 IOPS, 33.80 MiB/s 8684.33 IOPS, 33.92 MiB/s 8703.71 IOPS, 34.00 MiB/s 8729.88 IOPS, 34.10 MiB/s 8748.11 IOPS, 34.17 MiB/s
00:19:21.650 [2024-10-01 15:32:14.126167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.650 [2024-10-01 15:32:14.126216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / "ABORTED - SQ DELETION (00/08)" notice pair repeats for the next qpair teardown at 15:32:14 (READs lba:32920-33472, WRITEs lba:33480-33624) on qid:1; repeats elided ...]
00:19:21.652 [2024-10-01 15:32:14.129079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:21.652
[2024-10-01 15:32:14.129093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.129977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.129991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.130020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:21.652 [2024-10-01 15:32:14.130035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.130049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.130079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.130109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.130138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.130167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.652 [2024-10-01 15:32:14.130199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.652 [2024-10-01 15:32:14.130251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.652 [2024-10-01 15:32:14.130262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33928 len:8 PRP1 0x0 PRP2 0x0 00:19:21.652 [2024-10-01 15:32:14.130276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130327] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fb9a20 was disconnected and freed. reset controller. 
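The flood of "ABORTED - SQ DELETION (00/08)" notices above is the expected drain path: when bdev_nvme disconnects and deletes the I/O qpair during failover, every in-flight READ and WRITE (here LBAs 33392 through 33928 on qid 1) is completed manually with that status before the qpair is freed and the controller reset starts. A quick way to quantify such a drain from the capture file, sketched against the try.txt this test writes; the grep helper is illustrative and not part of the harness:

    # Hypothetical helper: tally aborted commands in the captured log (path
    # matches the try.txt referenced later in this test).
    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    grep -c 'ABORTED - SQ DELETION' "$log"               # aborted completions
    grep -c 'print_command: \*NOTICE\*: READ' "$log"     # aborted reads
    grep -c 'print_command: \*NOTICE\*: WRITE' "$log"    # aborted writes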
00:19:21.652 [2024-10-01 15:32:14.130345] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:21.652 [2024-10-01 15:32:14.130399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.652 [2024-10-01 15:32:14.130436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.652 [2024-10-01 15:32:14.130482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.652 [2024-10-01 15:32:14.130510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.652 [2024-10-01 15:32:14.130538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.652 [2024-10-01 15:32:14.130552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:21.652 [2024-10-01 15:32:14.134522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:21.652 [2024-10-01 15:32:14.134567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f442c0 (9): Bad file descriptor 00:19:21.652 [2024-10-01 15:32:14.173084] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
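One full failover cycle completes above: the active path 10.0.0.3:4422 fails over to 10.0.0.3:4420, the admin qpair's four outstanding ASYNC EVENT REQUESTs are aborted, the controller is marked failed and disconnected, and the reset succeeds roughly 40 ms later. The paths being rotated were attached up front with failover enabled; a minimal sketch of that attach, reusing the same command the harness issues further down (same socket, bdev name, and subsystem NQN as this run):

    # Attach the controller with failover enabled; alternate paths are added by
    # repeating the attach against the same -b name with a different port.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover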
00:19:21.652 8718.20 IOPS, 34.06 MiB/s 8724.27 IOPS, 34.08 MiB/s 8733.25 IOPS, 34.11 MiB/s 8733.92 IOPS, 34.12 MiB/s 8744.00 IOPS, 34.16 MiB/s 8743.73 IOPS, 34.16 MiB/s 00:19:21.652 Latency(us) 00:19:21.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.652 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:21.652 Verification LBA range: start 0x0 length 0x4000 00:19:21.652 NVMe0n1 : 15.01 8745.13 34.16 218.29 0.00 14248.20 625.57 21567.30 00:19:21.652 =================================================================================================================== 00:19:21.652 Total : 8745.13 34.16 218.29 0.00 14248.20 625.57 21567.30 00:19:21.652 Received shutdown signal, test time was about 15.000000 seconds 00:19:21.652 00:19:21.652 Latency(us) 00:19:21.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.652 =================================================================================================================== 00:19:21.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.652 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:21.652 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:21.652 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:21.652 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88594 00:19:21.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.653 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88594 /var/tmp/bdevperf.sock 00:19:21.653 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:21.653 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 88594 ']' 00:19:21.653 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.653 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.653 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
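The harness now starts a fresh bdevperf in RPC-server mode: -z makes it initialize and then wait for a perform_tests RPC instead of running immediately, and -r sets the UNIX socket it listens on. A condensed sketch of the launch-and-wait pattern, built only from the commands visible in this trace (waitforlisten is the autotest helper invoked above):

    # Start bdevperf paused, wait for its RPC socket, then kick off the run.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests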
00:19:21.653 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.653 15:32:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:21.653 15:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:21.653 15:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:21.653 15:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:21.653 [2024-10-01 15:32:20.408097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:21.653 15:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:21.653 [2024-10-01 15:32:20.728369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:21.653 15:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:22.221 NVMe0n1 00:19:22.221 15:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:22.479 00:19:22.479 15:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:23.046 00:19:23.046 15:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:23.046 15:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:23.305 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:23.563 15:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:26.873 15:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:26.873 15:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:26.873 15:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88723 00:19:26.873 15:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:26.873 15:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88723 00:19:28.249 { 00:19:28.249 "results": [ 00:19:28.249 { 00:19:28.249 "job": "NVMe0n1", 00:19:28.249 "core_mask": "0x1", 00:19:28.249 "workload": "verify", 00:19:28.249 "status": "finished", 00:19:28.249 "verify_range": { 00:19:28.249 "start": 0, 00:19:28.249 "length": 16384 00:19:28.249 }, 00:19:28.249 "queue_depth": 128, 
00:19:28.249 "io_size": 4096, 00:19:28.249 "runtime": 1.010842, 00:19:28.249 "iops": 8866.86544484697, 00:19:28.249 "mibps": 34.636193143933475, 00:19:28.249 "io_failed": 0, 00:19:28.249 "io_timeout": 0, 00:19:28.249 "avg_latency_us": 14357.19685089205, 00:19:28.249 "min_latency_us": 2263.970909090909, 00:19:28.249 "max_latency_us": 14239.185454545455 00:19:28.249 } 00:19:28.249 ], 00:19:28.249 "core_count": 1 00:19:28.249 } 00:19:28.249 15:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:28.249 [2024-10-01 15:32:19.836521] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 00:19:28.249 [2024-10-01 15:32:19.836649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88594 ] 00:19:28.249 [2024-10-01 15:32:19.973966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.249 [2024-10-01 15:32:20.033210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.249 [2024-10-01 15:32:22.498836] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:28.249 [2024-10-01 15:32:22.498974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.249 [2024-10-01 15:32:22.499001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.249 [2024-10-01 15:32:22.499020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.249 [2024-10-01 15:32:22.499035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.249 [2024-10-01 15:32:22.499049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.249 [2024-10-01 15:32:22.499063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.249 [2024-10-01 15:32:22.499077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.249 [2024-10-01 15:32:22.499091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.249 [2024-10-01 15:32:22.499105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.249 [2024-10-01 15:32:22.499148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.249 [2024-10-01 15:32:22.499180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c02c0 (9): Bad file descriptor 00:19:28.249 [2024-10-01 15:32:22.506660] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:28.249 Running I/O for 1 seconds... 
00:19:28.249 8835.00 IOPS, 34.51 MiB/s 00:19:28.249 Latency(us) 00:19:28.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.249 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:28.249 Verification LBA range: start 0x0 length 0x4000 00:19:28.249 NVMe0n1 : 1.01 8866.87 34.64 0.00 0.00 14357.20 2263.97 14239.19 00:19:28.249 =================================================================================================================== 00:19:28.249 Total : 8866.87 34.64 0.00 0.00 14357.20 2263.97 14239.19 00:19:28.249 15:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.249 15:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:28.249 15:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:28.508 15:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.508 15:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:29.074 15:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:29.333 15:32:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 88594 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 88594 ']' 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 88594 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88594 00:19:32.633 killing process with pid 88594 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88594' 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 88594 00:19:32.633 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 88594 00:19:32.892 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:32.892 15:32:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:33.150 
15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:33.150 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:33.150 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:33.150 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:33.150 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:33.151 rmmod nvme_tcp 00:19:33.151 rmmod nvme_fabrics 00:19:33.151 rmmod nvme_keyring 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 88232 ']' 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 88232 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 88232 ']' 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 88232 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.151 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88232 00:19:33.409 killing process with pid 88232 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88232' 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 88232 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 88232 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:19:33.409 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:33.667 00:19:33.667 real 0m34.031s 00:19:33.667 user 2m12.318s 00:19:33.667 sys 0m4.714s 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.667 ************************************ 00:19:33.667 END TEST nvmf_failover 00:19:33.667 ************************************ 00:19:33.667 15:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:33.925 15:32:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:33.926 15:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:33.926 15:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.926 15:32:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.926 ************************************ 00:19:33.926 START TEST nvmf_host_discovery 00:19:33.926 ************************************ 00:19:33.926 15:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:33.926 * Looking for test storage... 
00:19:33.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:33.926 15:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:33.926 15:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:19:33.926 15:32:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:33.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.926 --rc genhtml_branch_coverage=1 00:19:33.926 --rc genhtml_function_coverage=1 00:19:33.926 --rc genhtml_legend=1 00:19:33.926 --rc geninfo_all_blocks=1 00:19:33.926 --rc geninfo_unexecuted_blocks=1 00:19:33.926 00:19:33.926 ' 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:33.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.926 --rc genhtml_branch_coverage=1 00:19:33.926 --rc genhtml_function_coverage=1 00:19:33.926 --rc genhtml_legend=1 00:19:33.926 --rc geninfo_all_blocks=1 00:19:33.926 --rc geninfo_unexecuted_blocks=1 00:19:33.926 00:19:33.926 ' 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:33.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.926 --rc genhtml_branch_coverage=1 00:19:33.926 --rc genhtml_function_coverage=1 00:19:33.926 --rc genhtml_legend=1 00:19:33.926 --rc geninfo_all_blocks=1 00:19:33.926 --rc geninfo_unexecuted_blocks=1 00:19:33.926 00:19:33.926 ' 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:33.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.926 --rc genhtml_branch_coverage=1 00:19:33.926 --rc genhtml_function_coverage=1 00:19:33.926 --rc genhtml_legend=1 00:19:33.926 --rc geninfo_all_blocks=1 00:19:33.926 --rc geninfo_unexecuted_blocks=1 00:19:33.926 00:19:33.926 ' 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=425da7d6-2e40-4e0d-b2ef-fba0474bdabf 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.926 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.926 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
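From these variables, nvmf_veth_init builds a bridged dual-path topology: two initiator veths on the host (10.0.0.1 and 10.0.0.2) and two target veths whose far ends sit inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), with all of their peer ends enslaved to the nvmf_br bridge. A condensed sketch of one of the two paths, using the same ip commands the trace below runs in full:

    # First initiator/target path; the second is identical with nvmf_init_if2,
    # nvmf_tgt_if2, 10.0.0.2 and 10.0.0.4.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br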
00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:33.927 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:34.185 Cannot find device "nvmf_init_br" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:34.185 Cannot find device "nvmf_init_br2" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:34.185 Cannot find device "nvmf_tgt_br" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:34.185 Cannot find device "nvmf_tgt_br2" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:34.185 Cannot find device "nvmf_init_br" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:34.185 Cannot find device "nvmf_init_br2" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:34.185 Cannot find device "nvmf_tgt_br" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:34.185 Cannot find device "nvmf_tgt_br2" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:34.185 Cannot find device "nvmf_br" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:34.185 Cannot find device "nvmf_init_if" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:34.185 Cannot find device "nvmf_init_if2" 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:34.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:34.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:34.185 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:34.443 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:34.443 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:34.443 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:19:34.443 00:19:34.444 --- 10.0.0.3 ping statistics --- 00:19:34.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.444 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:34.444 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:34.444 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:19:34.444 00:19:34.444 --- 10.0.0.4 ping statistics --- 00:19:34.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.444 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:34.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:34.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:34.444 00:19:34.444 --- 10.0.0.1 ping statistics --- 00:19:34.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.444 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:34.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
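The ip(8) and iptables calls traced above build a fixed test topology: veth pairs whose root-namespace peers hang off a bridge (nvmf_br), with the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch of that setup follows, showing one veth pair per side (the run above creates two per side) and an ipts wrapper reconstructed from the expanded iptables commands in the trace, so treat it as illustrative rather than a copy of nvmf/common.sh:

  set -e
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per side: *_if carries the address, *_br gets bridged
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # tag every rule so teardown can later delete exactly what was added
  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # root namespace to target namespace, over the bridge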
00:19:34.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:19:34.444 00:19:34.444 --- 10.0.0.2 ping statistics --- 00:19:34.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.444 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=89076 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 89076 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 89076 ']' 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.444 15:32:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.444 [2024-10-01 15:32:33.605008] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
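nvmfappstart above launches nvmf_tgt inside the target namespace (NVMF_APP is prefixed with the netns exec command at nvmf/common.sh@227) and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-poll pattern, assuming the in-repo rpc.py client and an arbitrary retry budget (the real helper in autotest_common.sh does more bookkeeping):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app can serve requests
  for ((i = 0; i < 100; i++)); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done
  kill -0 "$nvmfpid"   # fail loudly if the target died during startup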
00:19:34.444 [2024-10-01 15:32:33.605161] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.702 [2024-10-01 15:32:33.748444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.702 [2024-10-01 15:32:33.819858] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.702 [2024-10-01 15:32:33.819927] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.702 [2024-10-01 15:32:33.819949] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.702 [2024-10-01 15:32:33.819964] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.702 [2024-10-01 15:32:33.819978] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.702 [2024-10-01 15:32:33.820018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.681 [2024-10-01 15:32:34.647801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.681 [2024-10-01 15:32:34.655890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.681 null0 00:19:35.681 15:32:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.681 null1 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=89132 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 89132 /tmp/host.sock 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 89132 ']' 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.681 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.681 15:32:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.681 [2024-10-01 15:32:34.748832] Starting SPDK v25.01-pre git sha1 f15f2a1dd / DPDK 24.03.0 initialization... 
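With the target up, the rpc_cmd calls traced above provision it over its default socket, while the second application (started with -r /tmp/host.sock, pid 89132 here) acts as the NVMe-oF host. The same provisioning steps as explicit rpc.py invocations, a sketch with arguments copied from the trace and flag semantics left to rpc.py's own help:

  rpc=./scripts/rpc.py
  # TCP transport with the stock "-o -u 8192" options from NVMF_TRANSPORT_OPTS
  $rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  # discovery subsystem listener on the well-known discovery port 8009
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
      nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  # two null bdevs (name, size in MB, block size) to back the namespaces
  $rpc -s /var/tmp/spdk.sock bdev_null_create null0 1000 512
  $rpc -s /var/tmp/spdk.sock bdev_null_create null1 1000 512
  $rpc -s /var/tmp/spdk.sock bdev_wait_for_examine
  # host-side app on its own RPC socket, pinned to core 0 (-m 0x1)
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!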
00:19:35.681 [2024-10-01 15:32:34.748970] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89132 ] 00:19:35.939 [2024-10-01 15:32:34.914396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.939 [2024-10-01 15:32:35.012366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.939 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.939 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:35.939 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.939 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:35.939 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.939 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:36.196 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.454 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:36.454 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:36.454 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.454 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.454 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.454 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:36.454 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:36.454 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:36.454 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.455 [2024-10-01 15:32:35.504102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:36.455 15:32:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.455 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.714 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:19:36.715 15:32:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:36.973 [2024-10-01 15:32:36.130957] bdev_nvme.c:7152:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:36.973 [2024-10-01 15:32:36.131005] bdev_nvme.c:7232:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:36.973 
[2024-10-01 15:32:36.131027] bdev_nvme.c:7115:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:37.231 [2024-10-01 15:32:36.217131] bdev_nvme.c:7081:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:37.231 [2024-10-01 15:32:36.274236] bdev_nvme.c:6971:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:37.231 [2024-10-01 15:32:36.274291] bdev_nvme.c:6930:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:37.797 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.797 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:37.797 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:37.797 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:37.797 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:37.797 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:37.797 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.798 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:38.057 15:32:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:38.057 15:32:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.057 [2024-10-01 15:32:37.084992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:38.057 [2024-10-01 15:32:37.086122] bdev_nvme.c:7134:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:38.057 [2024-10-01 15:32:37.086166] bdev_nvme.c:7115:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:38.057 [2024-10-01 15:32:37.171604] bdev_nvme.c:7076:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:38.057 [2024-10-01 15:32:37.171654] bdev_nvme.c:7094:discovery_log_page_cb: *ERROR*: Discovery[10.0.0.3:8009] spdk_bdev_nvme_create failed (Invalid argument) 00:19:38.057 [2024-10-01 15:32:37.171678] bdev_nvme.c:6930:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:38.057 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.315 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:19:38.315 15:32:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:19:39.250 15:32:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:40.233 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:40.233 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:40.233 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:40.233 15:32:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:40.233 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.233 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.233 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:40.233 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:40.233 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:40.233 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.491 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:19:40.491 15:32:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:41.427 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.427 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:41.427 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:41.427 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:41.427 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:41.427 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:41.427 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.427 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:41.427 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.427 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.428 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:19:41.428 15:32:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs
00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:19:42.362 15:32:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:19:43.746 15:32:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:19:44.682 15:32:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:19:45.619 15:32:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:19:46.554 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:19:46.554 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:19:46.554 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:19:46.814 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:19:46.814 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:19:46.814 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.814 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:46.814 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:19:46.814 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:19:46.814 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.814 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:19:46.814 15:32:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:19:47.750 15:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 1
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # trap - ERR
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # print_backtrace
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]]
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1155 -- # args=('--transport=tcp')
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1155 -- # local args
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1157 -- # xtrace_disable
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:48.684 ========== Backtrace start: ==========
00:19:48.684
00:19:48.684 in /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh:122 -> main(["--transport=tcp"])
00:19:48.684 ...
00:19:48.684 117 # we should see a second path on the nvme0 subsystem now.
00:19:48.684 118 $rpc_py nvmf_subsystem_add_listener ${NQN}0 -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_SECOND_PORT
00:19:48.684 119 # Wait a bit to make sure the discovery service has a chance to detect the changes
00:19:48.684 120 waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:19:48.684 121 waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:19:48.684 => 122 waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:19:48.684 123 is_notification_count_eq 0
00:19:48.684 124
00:19:48.684 125 # Remove the listener for the first port. The subsystem and bdevs should stay, but we should see
00:19:48.684 126 # the path to that first port disappear.
00:19:48.684 127 $rpc_py nvmf_subsystem_remove_listener ${NQN}0 -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
00:19:48.684 ...
00:19:48.684
00:19:48.684 ========== Backtrace end ==========
15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1194 -- # return 0
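The polling traced above is driven by two shell helpers. The following is a minimal sketch of their shape, reconstructed from the xtrace itself rather than copied from the SPDK sources; the real definitions live in test/nvmf/host/discovery.sh and test/common/autotest_common.sh, and their exact argument handling may differ.

# get_subsystem_paths: list the listener ports (trsvcid) of one controller,
# numerically sorted and joined on a single line. Mirrors the
# rpc_cmd | jq | sort | xargs pipeline visible in the trace.
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

# waitforcondition: re-evaluate a condition once per second until it holds
# or the retry budget runs out. The 60-try default is an assumption; the
# (( max-- )) / sleep 1 / return 1 shape matches the trace above.
waitforcondition() {
    local cond=$1 max=${2:-60}
    while ((max--)); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1 # the path taken in this run: the condition never held
}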
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@1 -- # process_shm --id 0
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@808 -- # type=--id
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@809 -- # id=0
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:19:48.684 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@820 -- # for n in $shm_files
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:48.942 nvmf_trace.0
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@823 -- # return 0
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@1 -- # kill 89132
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@1 -- # nvmftestfini
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:48.942 rmmod nvme_tcp
00:19:48.942 rmmod nvme_fabrics
00:19:48.942 rmmod nvme_keyring
00:19:48.942 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:48.943 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e
00:19:48.943 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0
00:19:48.943 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 89076 ']'
00:19:48.943 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 89076
00:19:48.943 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 89076 ']'
00:19:48.943 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 89076
00:19:48.943 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname
00:19:48.943 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:48.943 15:32:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89076
00:19:48.943 killing process with pid 89076
15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:48.943 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:48.943 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89076'
00:19:48.943 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 89076
00:19:48.943 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 89076
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:19:49.201 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@1 -- # exit 1
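Condensed, the teardown traced above (nvmfcleanup, killprocess, iptr and nvmf_veth_fini) performs roughly the following sequence. This is a sketch of what the log shows, not the full nvmf/common.sh implementation; the final netns deletion is an assumption about what _remove_spdk_ns does.

sync
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break # also unloads nvme_fabrics/nvme_keyring
done
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"  # target pid 89076 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns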
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # trap - ERR
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # print_backtrace
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]]
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh' 'nvmf_host_discovery' '--transport=tcp')
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1155 -- # local args
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1157 -- # xtrace_disable
00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:49.460 ========== Backtrace start: ==========
00:19:49.460
00:19:49.460 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_host_discovery"],["/home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh"],["--transport=tcp"])
00:19:49.460 ...
00:19:49.460 1120 timing_enter $test_name
00:19:49.460 1121 echo "************************************"
00:19:49.460 1122 echo "START TEST $test_name"
00:19:49.460 1123 echo "************************************"
00:19:49.460 1124 xtrace_restore
00:19:49.460 1125 time "$@"
00:19:49.460 1126 xtrace_disable
00:19:49.460 1127 echo "************************************"
00:19:49.460 1128 echo "END TEST $test_name"
00:19:49.460 1129 echo "************************************"
00:19:49.460 1130 timing_exit $test_name
00:19:49.460 ...
00:19:49.460 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh:26 -> main(["--transport=tcp"])
00:19:49.460 ...
00:19:49.460 21
00:19:49.460 22 run_test "nvmf_identify" $rootdir/test/nvmf/host/identify.sh "${TEST_ARGS[@]}"
00:19:49.460 23 run_test "nvmf_perf" $rootdir/test/nvmf/host/perf.sh "${TEST_ARGS[@]}"
00:19:49.460 24 run_test "nvmf_fio_host" $rootdir/test/nvmf/host/fio.sh "${TEST_ARGS[@]}"
00:19:49.460 25 run_test "nvmf_failover" $rootdir/test/nvmf/host/failover.sh "${TEST_ARGS[@]}"
00:19:49.460 => 26 run_test "nvmf_host_discovery" $rootdir/test/nvmf/host/discovery.sh "${TEST_ARGS[@]}"
00:19:49.460 27 run_test "nvmf_host_multipath_status" $rootdir/test/nvmf/host/multipath_status.sh "${TEST_ARGS[@]}"
00:19:49.460 28 run_test "nvmf_discovery_remove_ifc" $rootdir/test/nvmf/host/discovery_remove_ifc.sh "${TEST_ARGS[@]}"
00:19:49.460 29 run_test "nvmf_identify_kernel_target" "$rootdir/test/nvmf/host/identify_kernel_nvmf.sh" "${TEST_ARGS[@]}"
00:19:49.460 30 run_test "nvmf_auth_host" "$rootdir/test/nvmf/host/auth.sh" "${TEST_ARGS[@]}"
00:19:49.460 31
00:19:49.460 ...
00:19:49.461
00:19:49.461 ========== Backtrace end ==========
15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1194 -- # return 0
00:19:49.461
00:19:49.461 real 0m15.584s
00:19:49.461 user 0m29.440s
00:19:49.461 sys 0m2.017s
00:19:49.461 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1 -- # exit 1
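Every failure in this log unwinds through the same run_test wrapper in test/common/autotest_common.sh; the context lines quoted in the backtraces (1120-1130) amount to the following. The argument handling at the top is an assumption, and the ERR-trap plumbing that actually prints the backtraces is omitted.

run_test() {
    local test_name=$1 # assumed: the first argument names the test
    shift
    timing_enter "$test_name"
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    xtrace_restore
    time "$@" # runs the test script; a nonzero status trips the ERR trap
    xtrace_disable
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    timing_exit "$test_name"
}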
00:19:49.460 00:19:49.460 ========== Backtrace end ========== 00:19:49.460 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1194 -- # return 0 00:19:49.460 00:19:49.460 real 0m15.584s 00:19:49.460 user 0m29.440s 00:19:49.460 sys 0m2.017s 00:19:49.461 15:32:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1 -- # exit 1 00:19:49.461 15:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # trap - ERR 00:19:49.461 15:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # print_backtrace 00:19:49.461 15:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:19:49.461 15:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh' 'nvmf_host' '--transport=tcp') 00:19:49.461 15:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # local args 00:19:49.461 15:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1157 -- # xtrace_disable 00:19:49.461 15:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.461 ========== Backtrace start: ========== 00:19:49.461 00:19:49.461 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_host"],["/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh"],["--transport=tcp"]) 00:19:49.461 ... 00:19:49.461 1120 timing_enter $test_name 00:19:49.461 1121 echo "************************************" 00:19:49.461 1122 echo "START TEST $test_name" 00:19:49.461 1123 echo "************************************" 00:19:49.461 1124 xtrace_restore 00:19:49.461 1125 time "$@" 00:19:49.461 1126 xtrace_disable 00:19:49.461 1127 echo "************************************" 00:19:49.461 1128 echo "END TEST $test_name" 00:19:49.461 1129 echo "************************************" 00:19:49.461 1130 timing_exit $test_name 00:19:49.461 ... 00:19:49.461 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh:16 -> main(["--transport=tcp"]) 00:19:49.461 ... 00:19:49.461 11 exit 0 00:19:49.461 12 fi 00:19:49.461 13 00:19:49.461 14 run_test "nvmf_target_core" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:19:49.461 15 run_test "nvmf_target_extra" $rootdir/test/nvmf/nvmf_target_extra.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:19:49.461 => 16 run_test "nvmf_host" $rootdir/test/nvmf/nvmf_host.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:19:49.461 17 00:19:49.461 18 # Interrupt mode for now is supported only on the target, with the TCP transport and posix or ssl socket implementations. 00:19:49.461 19 if [[ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" && $SPDK_TEST_URING -eq 0 ]]; then 00:19:49.461 20 run_test "nvmf_target_core_interrupt_mode" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:19:49.461 21 run_test "nvmf_interrupt" $rootdir/test/nvmf/target/interrupt.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:19:49.461 ... 
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@1125 -- # trap - ERR
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@1125 -- # print_backtrace
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]]
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh' 'nvmf_tcp' '/home/vagrant/spdk_repo/autorun-spdk.conf')
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@1155 -- # local args
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@1157 -- # xtrace_disable
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:49.461 ========== Backtrace start: ==========
00:19:49.461
00:19:49.461 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_tcp"],["/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh"],["--transport=tcp"])
00:19:49.461 ...
00:19:49.461 1120 timing_enter $test_name
00:19:49.461 1121 echo "************************************"
00:19:49.461 1122 echo "START TEST $test_name"
00:19:49.461 1123 echo "************************************"
00:19:49.461 1124 xtrace_restore
00:19:49.461 1125 time "$@"
00:19:49.461 1126 xtrace_disable
00:19:49.461 1127 echo "************************************"
00:19:49.461 1128 echo "END TEST $test_name"
00:19:49.461 1129 echo "************************************"
00:19:49.461 1130 timing_exit $test_name
00:19:49.461 ...
00:19:49.461 in /home/vagrant/spdk_repo/spdk/autotest.sh:280 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"])
00:19:49.461 ...
00:19:49.461 275 # list of all tests can properly differentiate them. Please do not merge them into one line.
00:19:49.461 276 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then
00:19:49.461 277 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:19:49.461 278 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:19:49.461 279 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then
00:19:49.461 => 280 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:19:49.461 281 if [[ $SPDK_TEST_URING -eq 0 ]]; then
00:19:49.461 282 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:19:49.461 283 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:19:49.461 284 fi
00:19:49.461 285 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh
00:19:49.461 ...
00:19:49.461
00:19:49.461 ========== Backtrace end ==========
15:32:48 nvmf_tcp -- common/autotest_common.sh@1194 -- # return 0
00:19:49.461
00:19:49.461 real 12m41.411s
00:19:49.461 user 34m41.793s
00:19:49.461 sys 2m41.833s
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@1392 -- # local autotest_es=1
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@1393 -- # xtrace_disable
00:19:49.461 15:32:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
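The autotest_cleanup that starts here produces the "Cleaning"/"Removing:" listing below; in effect it clears per-instance DPDK runtime state and SPDK shared-memory files, roughly as sketched (an assumption about the mechanism; the exact file set varies from run to run):

# Rough equivalent of the cleanup pass below; paths taken from the listing.
rm -rf /var/run/dpdk/spdk[0-9]* /var/run/dpdk/spdk_pid*
rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*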
00:20:01.663 INFO: APP EXITING
00:20:01.663 INFO: killing all VMs
00:20:01.663 INFO: killing vhost app
00:20:01.663 INFO: EXIT DONE
00:20:01.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:01.663 Waiting for block devices as requested
00:20:01.663 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:20:01.663 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:20:02.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:02.535 Cleaning
00:20:02.535 Removing: /var/run/dpdk/spdk0/config
00:20:02.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:20:02.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:20:02.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:20:02.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:20:02.535 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:20:02.535 Removing: /var/run/dpdk/spdk0/hugepage_info
00:20:02.535 Removing: /var/run/dpdk/spdk1/config
00:20:02.535 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:20:02.535 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:20:02.535 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:20:02.535 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:20:02.535 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:20:02.535 Removing: /var/run/dpdk/spdk1/hugepage_info
00:20:02.535 Removing: /var/run/dpdk/spdk2/config
00:20:02.535 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:20:02.535 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:20:02.535 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:20:02.535 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:20:02.535 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:20:02.535 Removing: /var/run/dpdk/spdk2/hugepage_info
00:20:02.535 Removing: /var/run/dpdk/spdk3/config
00:20:02.535 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:20:02.535 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:20:02.535 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:20:02.535 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:20:02.535 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:20:02.535 Removing: /var/run/dpdk/spdk3/hugepage_info
00:20:02.535 Removing: /var/run/dpdk/spdk4/config
00:20:02.535 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:20:02.535 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:20:02.535 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:20:02.535 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:20:02.535 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:20:02.535 Removing: /var/run/dpdk/spdk4/hugepage_info
00:20:02.535 Removing: /dev/shm/nvmf_trace.0
00:20:02.535 Removing: /dev/shm/spdk_tgt_trace.pid58698
00:20:02.535 Removing: /var/run/dpdk/spdk0
00:20:02.535 Removing: /var/run/dpdk/spdk1
00:20:02.535 Removing: /var/run/dpdk/spdk2
00:20:02.535 Removing: /var/run/dpdk/spdk3
00:20:02.535 Removing: /var/run/dpdk/spdk4
00:20:02.535 Removing: /var/run/dpdk/spdk_pid58550
00:20:02.535 Removing: /var/run/dpdk/spdk_pid58698
00:20:02.535 Removing: /var/run/dpdk/spdk_pid58967
00:20:02.535 Removing: /var/run/dpdk/spdk_pid59054
00:20:02.535 Removing: /var/run/dpdk/spdk_pid59080
00:20:02.535 Removing: /var/run/dpdk/spdk_pid59189
00:20:02.535 Removing: /var/run/dpdk/spdk_pid59206
00:20:02.535 Removing: /var/run/dpdk/spdk_pid59340
00:20:02.535 Removing: /var/run/dpdk/spdk_pid59636
00:20:02.535 Removing: /var/run/dpdk/spdk_pid59820
00:20:02.535 Removing: /var/run/dpdk/spdk_pid59903
00:20:02.535 Removing: /var/run/dpdk/spdk_pid60000
00:20:02.535 Removing: /var/run/dpdk/spdk_pid60084
00:20:02.535 Removing: /var/run/dpdk/spdk_pid60128
00:20:02.535 Removing: /var/run/dpdk/spdk_pid60158
00:20:02.535 Removing: /var/run/dpdk/spdk_pid60232
00:20:02.535 Removing: /var/run/dpdk/spdk_pid60337
00:20:02.535 Removing: /var/run/dpdk/spdk_pid60969
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61033
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61089
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61117
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61196
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61206
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61284
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61312
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61369
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61380
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61430
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61448
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61607
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61638
00:20:02.535 Removing: /var/run/dpdk/spdk_pid61717
00:20:02.535 Removing: /var/run/dpdk/spdk_pid62180
00:20:02.535 Removing: /var/run/dpdk/spdk_pid62544
00:20:02.535 Removing: /var/run/dpdk/spdk_pid64988
00:20:02.535 Removing: /var/run/dpdk/spdk_pid65034
00:20:02.535 Removing: /var/run/dpdk/spdk_pid65399
00:20:02.535 Removing: /var/run/dpdk/spdk_pid65445
00:20:02.535 Removing: /var/run/dpdk/spdk_pid65849
00:20:02.535 Removing: /var/run/dpdk/spdk_pid66427
00:20:02.535 Removing: /var/run/dpdk/spdk_pid66876
00:20:02.535 Removing: /var/run/dpdk/spdk_pid67898
00:20:02.535 Removing: /var/run/dpdk/spdk_pid68968
00:20:02.535 Removing: /var/run/dpdk/spdk_pid69085
00:20:02.535 Removing: /var/run/dpdk/spdk_pid69154
00:20:02.535 Removing: /var/run/dpdk/spdk_pid70780
00:20:02.535 Removing: /var/run/dpdk/spdk_pid71110
00:20:02.535 Removing: /var/run/dpdk/spdk_pid74951
00:20:02.535 Removing: /var/run/dpdk/spdk_pid75385
00:20:02.535 Removing: /var/run/dpdk/spdk_pid76016
00:20:02.535 Removing: /var/run/dpdk/spdk_pid76469
00:20:02.535 Removing: /var/run/dpdk/spdk_pid82503
00:20:02.535 Removing: /var/run/dpdk/spdk_pid83032
00:20:02.535 Removing: /var/run/dpdk/spdk_pid83140
00:20:02.535 Removing: /var/run/dpdk/spdk_pid83281
00:20:02.793 Removing: /var/run/dpdk/spdk_pid83323
00:20:02.793 Removing: /var/run/dpdk/spdk_pid83362
00:20:02.793 Removing: /var/run/dpdk/spdk_pid83420
00:20:02.793 Removing: /var/run/dpdk/spdk_pid83591
00:20:02.793 Removing: /var/run/dpdk/spdk_pid83739
00:20:02.793 Removing: /var/run/dpdk/spdk_pid84015
00:20:02.793 Removing: /var/run/dpdk/spdk_pid84132
00:20:02.793 Removing: /var/run/dpdk/spdk_pid84395
00:20:02.793 Removing: /var/run/dpdk/spdk_pid84520
00:20:02.793 Removing: /var/run/dpdk/spdk_pid84661
00:20:02.793 Removing: /var/run/dpdk/spdk_pid85045
00:20:02.793 Removing: /var/run/dpdk/spdk_pid85495
00:20:02.793 Removing: /var/run/dpdk/spdk_pid85496
00:20:02.793 Removing: /var/run/dpdk/spdk_pid85497
00:20:02.793 Removing: /var/run/dpdk/spdk_pid85767
00:20:02.793 Removing: /var/run/dpdk/spdk_pid86099
00:20:02.793 Removing: /var/run/dpdk/spdk_pid86422
00:20:02.793 Removing: /var/run/dpdk/spdk_pid87022
00:20:02.793 Removing: /var/run/dpdk/spdk_pid87024
00:20:02.793 Removing: /var/run/dpdk/spdk_pid87418
00:20:02.793 Removing: /var/run/dpdk/spdk_pid87436
00:20:02.793 Removing: /var/run/dpdk/spdk_pid87450
00:20:02.793 Removing: /var/run/dpdk/spdk_pid87484
00:20:02.793 Removing: /var/run/dpdk/spdk_pid87489
00:20:02.793 Removing: /var/run/dpdk/spdk_pid87905
00:20:02.793 Removing: /var/run/dpdk/spdk_pid87948
00:20:02.793 Removing: /var/run/dpdk/spdk_pid88355
00:20:02.793 Removing: /var/run/dpdk/spdk_pid88594
00:20:02.793 Removing: /var/run/dpdk/spdk_pid89132
00:20:02.793 Clean
00:20:10.898 15:33:08 nvmf_tcp -- common/autotest_common.sh@1451 -- # return 1
00:20:10.898 15:33:08 nvmf_tcp -- common/autotest_common.sh@1 -- # :
00:20:10.898 15:33:08 nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1
00:20:10.908 [Pipeline] }
00:20:10.925 [Pipeline] // timeout
00:20:10.933 [Pipeline] }
00:20:10.948 [Pipeline] // stage
00:20:10.955 [Pipeline] }
00:20:10.959 ERROR: script returned exit code 1
00:20:10.959 Setting overall build result to FAILURE
00:20:10.973 [Pipeline] // catchError
00:20:10.981 [Pipeline] stage
00:20:10.983 [Pipeline] { (Stop VM)
00:20:10.995 [Pipeline] sh
00:20:11.274 + vagrant halt
00:20:15.463 ==> default: Halting domain...
00:20:20.742 [Pipeline] sh
00:20:21.023 + vagrant destroy -f
00:20:25.221 ==> default: Removing domain...
00:20:25.234 [Pipeline] sh
00:20:25.517 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/output
00:20:25.526 [Pipeline] }
00:20:25.544 [Pipeline] // stage
00:20:25.549 [Pipeline] }
00:20:25.565 [Pipeline] // dir
00:20:25.570 [Pipeline] }
00:20:25.585 [Pipeline] // wrap
00:20:25.593 [Pipeline] }
00:20:25.605 [Pipeline] // catchError
00:20:25.613 [Pipeline] stage
00:20:25.615 [Pipeline] { (Epilogue)
00:20:25.629 [Pipeline] sh
00:20:25.912 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:28.456 [Pipeline] catchError
00:20:28.458 [Pipeline] {
00:20:28.472 [Pipeline] sh
00:20:28.753 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:29.011 Artifacts sizes are good
00:20:29.020 [Pipeline] }
00:20:29.035 [Pipeline] // catchError
00:20:29.048 [Pipeline] archiveArtifacts
00:20:29.055 Archiving artifacts
00:20:29.371 [Pipeline] cleanWs
00:20:29.382 [WS-CLEANUP] Deleting project workspace...
00:20:29.382 [WS-CLEANUP] Deferred wipeout is used...
00:20:29.388 [WS-CLEANUP] done
00:20:29.390 [Pipeline] }
00:20:29.408 [Pipeline] // stage
00:20:29.414 [Pipeline] }
00:20:29.429 [Pipeline] // node
00:20:29.435 [Pipeline] End of Pipeline
00:20:29.488 Finished: FAILURE